DAVID W. TAYLOR NAVAL SHIP 
RESEARCH AND DEVELOPMENT CENTER 




Bethesda, Md. 20084 




CONTROL THEORY WITH APPLICATIONS TO 

NAVAL HYDRODYNAMICS 

THE FIRST 

DAVID W. TAYLOR LECTURES 

APRIL 1972 



by 



Professor Dr. Reinier Timman 



Technische Hogeschool 
Delft, Netherlands 



Notes by 
Thomas J. Langan 

APPROVED FOR PUBLIC RELEASE: DISTRIBUTION UNLIMITED 




December 1975 



Report 4397 






MAJOR DTNSRDC ORGANIZATIONAL COMPONENTS

(Organizational chart: DTNSRDC Commander, 00; Technical Director, 01; Officer-in-Charge, Carderock, 05; Officer-in-Charge, Annapolis, 04; Systems Development Department, 11)

UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE

1. REPORT NUMBER: 4397
4. TITLE (and Subtitle): CONTROL THEORY WITH APPLICATIONS TO NAVAL HYDRODYNAMICS: THE FIRST DAVID W. TAYLOR LECTURES — APRIL 1972
7. AUTHOR(S): Reinier Timman (Technische Hogeschool, Delft, Netherlands)
9. PERFORMING ORGANIZATION NAME AND ADDRESS: Work Unit 4-1500-001-43/1509
11. CONTROLLING OFFICE NAME AND ADDRESS: David W. Taylor Naval Ship R&D Center, Bethesda, Md. 20084
12. REPORT DATE: December 1975
13. NUMBER OF PAGES: 75
15. SECURITY CLASS. (of this report): UNCLASSIFIED
16. DISTRIBUTION STATEMENT (of this Report): Approved for Public Release: Distribution Unlimited
18. SUPPLEMENTARY NOTES: Material contained in this report was presented in four lectures by Dr. Timman. Collation and technical editing was done by Dr. T. J. Langan, DTNSRDC Code 1552.
19. KEY WORDS: Modern control theory; Calculus of variations; Method of dynamic programming; Stochastic systems; Kalman-Bucy filter solution
20. ABSTRACT: The lectures present an introduction to modern control theory. Calculus of variations is used to study the problem of determining the optimal control for a deterministic system without constraints and for one with constraints. The method of dynamic programming is also used to solve the unconstrained control problem. Stochastic systems are introduced, and the Kalman-Bucy filter is derived.

DD FORM 1473 (1 JAN 73), EDITION OF 1 NOV 65 IS OBSOLETE
S/N 0102-014-6601






TABLE OF CONTENTS 

Page 

ABSTRACT 1 

INTRODUCTION 1 

THE OPTIMAL CONTROL PROBLEM 6 

RELATION TO DYNAMIC PROGRAMMING 23 

CONSTRAINTS ON THE CONTROL AND STATE VARIABLES 26 

STOCHASTIC SYSTEMS 40 

THE KALMAN-BUCY FILTER 53 

LIST OF FIGURES 

1 — Geometry of the Proof 17 

2 — Constant Cost Fronts 23 

3 — Constrained Variables 28 

4 — Optimal Trajectory 36 

5 — Switching Curve 36 

6 — Time Fronts 39 

7 — Stochastic Control System 41 

8 — Optimal Filter 62 



PREFACE 

The David W. Taylor Lectures were initiated as a living memorial to 
our founder, in recognition of his many contributions to the science of 
naval architecture and naval hydromechanics. His systematic investigation
of resistance of ship hulls is universally known and used, but of
equal importance was his use of hydrodynamic theory to solve practical 
problems. Many of the experimental techniques which he pioneered are 
still in use today (for example, the use of a spherical pitot tube for 
exploring the structure of a wake field). The system of mathematical 
lines developed by Taylor was used to develop many designs for the Navy 
long before the computer was invented. And perhaps most important of 
all, he established a tradition of applied scientific research at the 
"Model Basin" which has been carefully nurtured through the decades, and 
which we treasure and protect today. 

These lectures were conceived to support and strengthen this 
tradition. We will invite eminent scientists in fields closely related 
to the Center's work to spend a few weeks with us, to consult with and 
advise our working staff, and to give lectures on subjects of current 
interest. 

It is most fitting that Professor Reinier Timman, mathematician and 
philosopher, initiate this series. He has long been a friend and on 
several occasions has used the Center for a retreat, to his benefit and 
ours. He has inspired and advised our staff and cooperated in our work. 
His students at Delft have made leading contributions to the development 
of modern naval hydrodynamics. Professor Timman's belief that mathematics
can contribute powerfully to our technology is much in the David
Taylor tradition. We are honored that he agreed to give the first in 
this David W. Taylor Lecture Series. 



W. E. CUMMINS 



FOREWORD 

It is a great honor for me to be invited to give the first
in the series of David W. Taylor Lectures. My associations
with the Model Basin date from a long time ago, and a visit
to the United States is for me not a real visit unless I
have the opportunity to taste once more the stimulating
atmosphere which not only gives the Model Basin an outstanding
place in hydrodynamical research but also acts as
a breeding ground where nearly all outstanding people in the
field passed an essential period in their lives. So I am
extremely grateful to have been given the opportunity once
more to spend some time at this most interesting place and
to participate in its work. I wish to express my gratitude
to Justin McCarthy, who originated the idea of the lectures,
and to all other friends who made this period a success.
In particular, I am pleased that Dr. Langan, whom I used to
know as a promising undergraduate student, did a fine job
in editing the lectures.



R. TIMMAN 










ABSTRACT 
The lectures present an introduction to modern control 
theory. Calculus of variations is used to study the problem 
of determining the optimal control for a deterministic sys- 
tem without constraints and for one with constraints. The 
method of dynamic programming is also used to solve the 
unconstrained control problem. Stochastic systems are intro- 
duced, and the Kalman-Bucy filter is derived. 

INTRODUCTION 

Optimal control theory is concerned with the great human effort to control or influence processes of one type or another. The objectives and criteria for the performance of a physical system may be diffuse or defy tractable analysis in many situations, but the basic concepts on which to proceed have been established in control theory. One first considers a system and a process through which the state of the system is changing in time; in other words, some action or motion of the system takes place in time. This behavior of the system is described by a set of time-dependent variables x(t) = (x_1(t), ..., x_n(t)) which are called the state variables. In addition to the state of the system, one also considers controls by which the process in question can be influenced. These controls are represented by a set of variables u(t) = (u_1(t), ..., u_m(t)) which are called the control variables.

At a certain instant in time, say t_0, the state of the system is known to be x_0. If an analysis of the system is to be performed, a system of equations must be specified which predicts the state for t > t_0 and for a given control function u. These equations are called the dynamic equations for the system; they may take the form of an ordinary differential equation

    ẋ(t) = f(t, x, u)

or a difference equation

    x_{n+1} = f(t_n, x_n, u_n)

They might even take the form of an integro-differential-difference equation or a time delay equation, but they cannot take on a form such that the solution at some time t is dependent on the solution in the future, t' > t. The dynamic equations must reflect this principle of nonanticipation. One does not violate this principle by choosing a control in anticipation of the future and thus influencing the future state of the system based on estimated future information; in fact, the choice of such a control is actually based on the history of the state of the system available at the time of the choice.

If no further specification of system performance is given, every control function which yielded a physically realizable state of the system for t > t_0 would be a solution to the control problem. One can have a meaningful control problem only if there is a desired objective, a goal to be achieved by the process. Moreover, it is not sufficient merely to have a goal; there must be a control by which this goal can be achieved. This control could be the case of no control, f(t, x, u) = f(t, x); however, it must exist. Since it is not the purpose of these notes to delve into all the mathematical problems, it will be assumed that there exists at least one control by which the objective can be achieved. It will further be assumed that any control function used in the sequel yields a unique state function x(t) with x(t_0) = x_0; the state function is obtained by solving the dynamic equations.

In general, there are a number of controls which could yield the 
desired system state. From among this set of possible controls, one 
would like to choose the "best" control with respect to some performance 
criterion. For example, one would like to choose the control so that 
the process is carried out with a minimum cost in fuel, or time, or 
money. In the sequel, it is assumed that the performance criterion can 
be expressed in terms of a cost function; furthermore, it is assumed 
that the cost function is additive with respect to the contribution from 
each time interval. An example of such a cost function is 



    G(x_T, T) + ∫_0^T F(σ, x, u) dσ

where x_T = x(T). This cost function is dependent on the final state of the system through the function G and on the intermediate states and the control function through the function F. The additive property of the cost function with respect to the intermediate times is represented by the integral. By an optimal control is meant the control which minimizes the cost function; it is this function which is the desired result of optimal control theory.
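As a small numerical illustration of such a cost function (a Python sketch; the system, control, and cost chosen here are invented for the example and are not from the lectures), the functional G(x_T, T) + ∫ F dσ can be evaluated along the trajectory generated by a given control:

import numpy as np

def cost(control, f, F, G, x0, T, n=2000):
    """Evaluate G(x_T, T) + integral of F(s, x, u) ds along the trajectory
    generated by x' = f(t, x, u), x(0) = x0 (simple Euler integration)."""
    dt = T / n
    x, running = x0, 0.0
    for k in range(n):
        t = k * dt
        u = control(t, x)
        running += F(t, x, u) * dt        # additive contribution of each interval
        x += f(t, x, u) * dt
    return G(x, T) + running

# Made-up scalar example: x' = u, running cost (x^2 + u^2)/2, terminal cost x_T^2/2
C = cost(control=lambda t, x: -x,
         f=lambda t, x, u: u,
         F=lambda t, x, u: 0.5 * (x * x + u * u),
         G=lambda x_T, T: 0.5 * x_T * x_T,
         x0=1.0, T=1.0)
print("cost of the control u = -x:", round(C, 4))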

Any process that is being controlled is subject to unpredicted disturbances, and these can make a significant difference in the choice of a control function. Suppose the dynamic equation of a system is given by the differential equation

    dx/dt = u + p(t)

where p(t) represents a disturbance. The behavior of the system in response to the two different controls (u_1 = −x) and (u_2 = −e^{−t}) does not differ if there is no disturbance (p = 0); however, if a disturbance is present, the response is significantly different. If x_0 = 1, the response to the first control is given by

    x = e^{−t} + e^{−t} ∫_0^t e^{σ} p(σ) dσ

whereas the response to the second control is

    x = e^{−t} + ∫_0^t p(σ) dσ

Such differences could conceivably result in a different choice for an optimal control.
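The point can be checked numerically; the following sketch (an illustration only, with an arbitrarily chosen disturbance p(t) = 0.1 sin t) integrates dx/dt = u + p(t) from x_0 = 1 under the two controls and shows that the responses coincide when p = 0 but differ when the disturbance is present:

import numpy as np

def simulate(control, p, x0=1.0, T=5.0, n=5000):
    """Forward-Euler integration of dx/dt = u(t, x) + p(t)."""
    dt = T / n
    x = x0
    for k in range(n):
        t = k * dt
        x = x + dt * (control(t, x) + p(t))
    return x

p = lambda t: 0.1 * np.sin(t)            # sample disturbance (illustrative choice)
u1 = lambda t, x: -x                      # first control:  u = -x
u2 = lambda t, x: -np.exp(-t)             # second control: u = -exp(-t)

print("final value, control 1, with disturbance:", simulate(u1, p))
print("final value, control 2, with disturbance:", simulate(u2, p))
print("final value with p = 0 (same for both)  :", simulate(u1, lambda t: 0.0))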



In analyzing systems and their control, one must find a way to represent the unpredictable disturbances. Such disturbances cannot be modeled by analytic functions, since the value of an analytic function at any point is predictable from its values on an arbitrarily short interval. One answer to modeling these disturbances is to describe them as stochastic processes. The theory of such processes was developed to model the fluctuations observed in physical systems. Wiener processes, or Brownian motion processes, are of particular interest to the stochastic control problem; many of the disturbances that affect a control system can be modeled by processes generated from Wiener processes. A Wiener process is a stochastic process in which the statistical properties over the interval (t, t+τ) are the same as those over the interval (s, s+τ); moreover, the behavior of the process is independent over time intervals which do not overlap, and there is no trend in the behavior.

Once stochastic disturbances have been introduced into the control theory, the problem is no longer deterministic. The state variables and control variables are no longer predictable but must be described by their statistical properties. Kalman and Bucy [2] provide a solution to the stochastic control problem for nonstationary linear systems. Their solution consists of using an optimal filter to estimate from the observed system performance the state of the system in terms of the conditional mean; the estimated state is fed back to the control signal through linear feedback. The linear feedback is determined by solving a deterministic control problem; the filter depends on the disturbances and on the system dynamics, but it is independent of the cost. Although the nonlinear stochastic control problem, or its equivalent, the nonlinear filter problem, has not been solved, some headway has been made by Bucy and Joseph [3]; this lecture considers only the linear problem.



1. Astrom, K. J., "Introduction to Stochastic Control Theory," Academic Press, Inc., New York (1970).

2. Kalman, R. E. and R. S. Bucy, "New Results in Linear Filtering and Prediction Theory," Journal of Basic Engineering, Series D, American Society of Mechanical Engineers, Vol. 83, pp. 95-108 (1961).

3. Bucy, R. S. and P. D. Joseph, "Filtering for Stochastic Processes with Applications to Guidance," Interscience Publishers, Inc., New York (1968).



As an example of a control problem, consider a ship moving through a current of water; the ship is a system undergoing a change in state. In this example, the state is the position (x, y) of the ship. The parameters which control the motion of the ship are the power, which determines the velocity V relative to the water, and the steerage angle, which controls the heading angle θ. In this simplification of the system, the dynamic equations are:

    ẋ = V cos θ + u(x, y)
    ẏ = V sin θ + v(x, y)

where u and v are the velocity of the current in the x- and y-directions, respectively. The goal might be to go from point A to point B. If it is desired to reach B in the shortest possible time, the cost function would be the accumulated time; if it is desired to reach B with the minimum expenditure of fuel, the cost function would give the expended fuel in terms of x, y, V, and θ. A more complicated cost function would result if it is desired to reach B in the least time with a reasonable expenditure of fuel. Both the power and steerage angle could be subject to unpredictable perturbations; there could also be a stochastic perturbation of the current.
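A minimal simulation of these dynamic equations is sketched below (the current field, speed, and heading history are invented for the illustration); the minimum-time problem would then ask for the heading history θ(t) that carries the ship from A to B with the least accumulated time:

import numpy as np

def current(x, y):
    """Hypothetical current field (u, v); any smooth field could be used."""
    return 0.2 * y, -0.1 * x

def sail(x0, y0, V, theta_of_t, T, n=1000):
    """Integrate x' = V cos(theta) + u(x, y), y' = V sin(theta) + v(x, y)."""
    dt = T / n
    x, y = x0, y0
    for k in range(n):
        t = k * dt
        th = theta_of_t(t)
        u, v = current(x, y)
        x += dt * (V * np.cos(th) + u)
        y += dt * (V * np.sin(th) + v)
    return x, y

# Constant heading from A = (0, 0); an optimal control would instead choose
# theta(t) so that the target point B is reached in the smallest time T.
print("position after T = 3:", sail(0.0, 0.0, V=1.0, theta_of_t=lambda t: np.pi / 4, T=3.0))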

This lecture on control theory first treats a deterministic optimal control problem with no constraints on the controls. It is first solved by transforming the problem into a boundary-value problem for an ordinary differential equation, the so-called indirect approach; it is then solved by the direct method developed by Bellman, the method of dynamic programming.[4] The major contribution of modern control theory to the deterministic control problem has been the extension to controls with constraints, and a discussion of constrained controls constitutes another major topic of the lecture. Still another important area is the development of the theory of stochastic processes necessary in the treatment of stochastic controls. Finally, the theory of Kalman-Bucy filters is given, and their solution to the stochastic control problem is presented for linear systems.

4. Bellman, R. E. and S. E. Dreyfus, "Applied Dynamic Programming," Princeton University Press, Princeton, N.J. (1962).

THE OPTIMAL CONTROL PROBLEM 
In these lectures the simplest optimal control problem considered is that of a state variable x(t) and a control variable u(t) defined on an interval 0 ≤ t ≤ T. The process being controlled is described by the dynamic equation

    ẋ(t) = f(t, x, u)                                                  (1.1)

with

    x(0) = x_0                                                         (1.2)

The vector f is twice continuously differentiable with respect to x and Lipschitz continuous with respect to u; this latter condition means simply that there is a constant L such that for every pair of control vectors u and v

    |f(t, x, u) − f(t, x, v)| ≤ L|u − v|                               (1.3)

For each control vector u, these conditions imply that the state vector x, which is obtained from solving (1.1) and which also satisfies the initial condition (1.2), exists and is unique. Moreover, from among the set of control vectors, it is assumed that there is a unique control u which minimizes the cost function C_T. The cost function is defined by the following:

    C_T[u] = G(x_T, T) + ∫_0^T F(σ, x, u) dσ                           (1.4)

The function F is twice continuously differentiable with respect to x and Lipschitz continuous with respect to u; G represents the cost at the terminal point x(T) = x_T, and it is twice continuously differentiable with respect to x_T.

Suppose that v is an optimal control vector, and consider a slight deviation δu of this control vector. If

    u(t) = v + δu

then u(t) is also a control vector, as can be seen from an application of the theory of ordinary differential equations. If z is the state vector associated with the control v, the new control u yields a new state vector x given by

    x(t) = z + δx

where δx is an unknown. Moreover, since v minimizes the cost function, the new cost is greater:

    ∫_0^T F(σ, x, u) dσ + G(x_T, T) ≥ ∫_0^T F(σ, z, v) dσ + G(z_T, T)      (1.5)



Since the old state vector satisfies

    ż = f(t, z, v)

and the new one satisfies

    ẋ = f(t, x, u)

then

    ż + δẋ = ẋ = f(t, z + δx, v + δu)

Now by assumption f is twice continuously differentiable with respect to x; hence

    δẋ = f(t, z + δx, v + δu) − f(t, z, v)
        = f_x δx + f(t, z, v + δu) − f(t, z, v) + O(|δx|²)                 (1.6)

It is not necessary that δu be uniformly small; indeed, in problems involving bang-bang controls, this is not at all true. However, there can be deviations δu of order one only if their duration is short. It can be proved that if δu satisfies the condition

    ∫_0^T |δu(σ)| dσ < ε                                                   (1.7)

then the deviations δx(t) are also of order ε. Since by assumption f is Lipschitz continuous with respect to u,

    |f(t, z, u) − f(t, z, v)| ≤ L|u − v| = O(δu)

Moreover, it follows from Equation (1.6) that to the same order of approximation

    δẋ = f_x δx + f(t, z, u) − f(t, z, v)                                  (1.8)



or in abbreviated form

    δẋ = f_x δx + f(u) − f(v)                                              (1.9)

This equation is a linear differential equation for δx, and there are standard ways for solving linear differential equations.[5] One first considers the linear homogeneous equation

    ẏ = A y                                                                (1.10)

where in our case y represents the vector δx and A the matrix f_x. Let

    y^(j)(t) = (Φ_1j(t, τ), Φ_2j(t, τ), ..., Φ_nj(t, τ))

be the solution of Equation (1.10) with Φ_ij(τ, τ) = δ_ij, the Kronecker delta; moreover, let Φ(t, τ) be the matrix whose column vectors are the vectors y^(j), Φ(t, τ) = (Φ_ij(t, τ)). The matrix Φ(t, τ) is called the transition matrix or fundamental matrix for the differential Equation (1.10). From (1.10) it follows that, as a function of t,

    ∂Φ/∂t (t, τ) = A Φ(t, τ)                                               (1.11)

and by its definition

    Φ(τ, τ) = I                                                            (1.12)

where I is the unit matrix. The solution y(t) is given in terms of its value at t = τ by

    y(t) = Φ(t, τ) y(τ)                                                    (1.13)

5. Coddington, E. A. and N. Levinson, "Theory of Ordinary Differential Equations," McGraw-Hill Book Company, Inc., New York (1955).



Hence

    y(t) = Φ(t, τ) y(τ) = Φ(t, τ) Φ(τ, t) y(t)

or, if y ≠ 0,

    I = Φ(t, τ) Φ(τ, t)                                                    (1.14)

Differentiating with respect to t yields

    0 = ∂Φ/∂t (t, τ) Φ(τ, t) + Φ(t, τ) ∂/∂t Φ(τ, t)
      = A Φ(t, τ) Φ(τ, t) + Φ(t, τ) ∂/∂t Φ(τ, t)
      = A(t) + Φ(t, τ) ∂/∂t Φ(τ, t)

It can be shown that Φ(t, τ) has an inverse and that this inverse is Φ(τ, t); consequently

    ∂/∂t Φ(τ, t) = − Φ^(−1)(t, τ) A(t) = − Φ(τ, t) A(t)

that is, Φ(τ, t) as a function of its second argument t satisfies

    ∂/∂t Φ(τ, t) = − Φ(τ, t) A(t)                                          (1.15)

Although (1.15) will be used subsequently, of immediate interest is the solution to the inhomogeneous linear equation

    ẏ = A y + g(t)                                                         (1.16)


with y(τ) = 0; the solution is given by

    y(t) = ∫_τ^t Φ(t, σ) g(σ) dσ                                           (1.17)

which can be verified by substitution into (1.16). For the control problem, (1.17) has two consequences: it can be used in conjunction with (1.6) to obtain an estimate for the order of magnitude of δx, and it can be used to solve (1.9).
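
Numerically, the fundamental matrix can be generated by integrating (1.11) from the initial condition (1.12), and the representation (1.17) can then be checked against direct integration of (1.16); the sketch below does this for an arbitrarily chosen A(t) and g(t) (the matrices, the fixed-step integrator, and the quadrature rule are illustrative assumptions, not part of the lectures):

import numpy as np

def A(t):
    # arbitrary time-varying system matrix chosen for the illustration
    return np.array([[0.0, 1.0], [-2.0 - np.sin(t), -0.5]])

def g(t):
    # arbitrary forcing term for the inhomogeneous equation (1.16)
    return np.array([0.0, np.cos(t)])

def propagate(M, rhs, t0, t1, n=500):
    """Fixed-step RK4 integration of dM/dt = rhs(t, M) from t0 to t1."""
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = rhs(t, M)
        k2 = rhs(t + h / 2, M + h / 2 * k1)
        k3 = rhs(t + h / 2, M + h / 2 * k2)
        k4 = rhs(t + h, M + h * k3)
        M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return M

tau, t_end = 0.0, 2.0

# direct solution of y' = A y + g, y(tau) = 0
y_direct = propagate(np.zeros(2), lambda t, v: A(t) @ v + g(t), tau, t_end)

# variation-of-constants form (1.17): y(t) = integral of Phi(t, s) g(s) ds,
# where Phi(t, s) is obtained by propagating dPhi/dt = A Phi from each node s
n_quad = 200
s_vals = np.linspace(tau, t_end, n_quad + 1)
integrand = np.array([propagate(np.eye(2), lambda t, M: A(t) @ M, s, t_end) @ g(s)
                      for s in s_vals])
weights = np.full(n_quad + 1, (t_end - tau) / n_quad)
weights[0] *= 0.5
weights[-1] *= 0.5
y_formula = (weights[:, None] * integrand).sum(axis=0)

print("direct integration:", y_direct)
print("formula (1.17)    :", y_formula)
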
In the first case,

    |δx| ≤ ∫_0^t |Φ(t, σ)| |f(u) − f(v)| dσ + ∫_0^t O(|δx|²) dσ
          ≤ M ∫_0^t |f(u) − f(v)| dσ + ∫_0^t O(|δx|²) dσ

where M is a bound for Φ. From (1.3)

    |δx| ≤ LM ∫_0^t |δu| dσ + ∫_0^t O(|δx|²) dσ
          ≤ LMε + ∫_0^t O(|δx|²) dσ

By iteration

    |δx| ≤ LMε + ∫_0^t O(ε²) dσ = O(ε)

The second case is of more interest, of course, for it gives an approximation of δx good to the second order in ε, namely,

    δx = ∫_0^t Φ(t, σ) [f(u) − f(v)] dσ                                    (1.18)



where Φ is defined by

    ∂Φ/∂t (t, τ) = A(t) Φ(t, τ)                                            (1.19)

Now consider the difference in the values of the cost function; by (1.5)

    ∫_0^T [F(σ, x, u) − F(σ, z, v)] dσ + G(x_T, T) − G(z_T, T) ≥ 0

Hence, from the assumptions on F and G,

    ∫_0^T [F_x δx + F(u) − F(v)] dσ + G_x δx(T) ≥ 0

By (1.18),

    ∫_0^T [F_x(τ) ∫_0^τ Φ(τ, σ)(f(u(σ)) − f(v(σ))) dσ + F(u(τ)) − F(v(τ))] dτ
        + G_x ∫_0^T Φ(T, σ)(f(u(σ)) − f(v(σ))) dσ ≥ 0                      (1.20)

If the order of integration in the double integral is changed,

    ∫_0^T F_x(τ) ∫_0^τ Φ(τ, σ)[f(u(σ)) − f(v(σ))] dσ dτ
        = ∫_0^T { ∫_σ^T F_x(τ) Φ(τ, σ) dτ } [f(u(σ)) − f(v(σ))] dσ         (1.21)

The vector function p^T is defined by

    p^T(t) = − ∫_t^T F_x(τ) Φ(τ, t) dτ − G_x(T) Φ(T, t)                    (1.22)

Recall that one of the properties of Φ was (1.15):

    ∂/∂t Φ(τ, t) = − Φ(τ, t) f_x(t)

Then

    ṗ^T = F_x(t) Φ(t, t) − ∫_t^T F_x(τ) ∂/∂t Φ(τ, t) dτ − G_x(T) ∂/∂t Φ(T, t)
        = F_x(t) + ∫_t^T F_x(τ) Φ(τ, t) f_x(t) dτ + G_x(T) Φ(T, t) f_x(t)
        = F_x − p^T f_x                                                    (1.23)

with p^T(T) = − G_x(T). In terms of p^T, (1.20) becomes

    ∫_0^T [− p^T(σ)(f(u) − f(v)) + F(u) − F(v)] dσ ≥ 0

or

    ∫_0^T { [− F(v) + p^T f(v)] − [− F(u) + p^T f(u)] } dσ ≥ 0             (1.24)



Since δu is an arbitrary deviation satisfying only (1.7), it can be chosen such that u = v everywhere except on some arbitrary interval; as a consequence, the inequality in (1.24) must hold for the integrand:

    − F(v) + p^T f(v) ≥ − F(u) + p^T f(u)

Define

    H(t, u) = − F(u) + p^T f(u)                                            (1.25)

Then H satisfies

    H(t, v) ≥ H(t, u)                                                      (1.26)



for v, an optimal control. This is the Pontryagin maximal principle, which states that for given values of p^T and x at time t, the optimal control v(t) is the control function for which the Hamiltonian H(t, u) is a maximum.

If the control functions are sufficiently smooth, the optimal control is that control for which

    0 = ∂H/∂u = − F_u + p^T f_u                                            (1.27)

It is assumed here that f is differentiable with respect to u; prior to this equation, f need only be Lipschitz continuous with respect to u. This equation is a system of m equations which could be solved for the m control functions (u_1, ..., u_m) in terms of the state variables (x_1, ..., x_n) and the new variables (p_1, ..., p_n). Consequently, the optimal control problem has been reduced to a two-point, boundary-value problem for an ordinary differential equation:

    ẋ = f(t, x, u) = ∂H/∂p^T

    ṗ^T = F_x − p^T f_x = − ∂H/∂x                                          (1.28)

    0 = ∂H/∂u



where

    x(0) = x_0

    p(T) = − G_x(T)                                                        (1.29)

There are just enough conditions to determine x, p^T, and u.

The function H contains the variables x, p^T, u, and of course t. Using (1.26) to eliminate u, (1.28) can be expressed in terms of the set of dual variables x and p, where p' = p^T and the prime denotes the transpose of the vector; the resulting system is the familiar canonical form of classical mechanics:


    ẋ = ∂H/∂p

    ṗ = − ∂H/∂x                                                            (1.30)

The boundary conditions are stated in terms of x_0, x_T, and T; for instance, both x_T and T might be fixed, or either one might vary while the other is fixed. No boundary conditions are specified directly in terms of p; the boundary conditions on p are obtained indirectly by substitution into (1.29). Equation (1.29) does, however, contain a sufficient set of conditions to pose a two-point, boundary-value problem for (1.30).

Another form that the boundary condition at t = T might assume is for x_T and T to satisfy an end condition of the form

    M(x_T, T) = 0                                                          (1.31)

where M is a twice continuously differentiable vector function of both its arguments. In this case the method of Lagrange multipliers will be used to transform the optimal control problem into a corresponding two-point, boundary-value problem. The vector q is introduced here as a Lagrange variable. Now the problem of minimizing the cost function (1.4) is replaced by the problem of finding the unconstrained minimum of

    C_q(u) = ∫_0^T [F(σ, x, u) + q'(ẋ − f(σ, x, u))] dσ + μ'M(x_T, T) + G(x_T, T)      (1.32)

The boundary condition (1.31) has been inserted into the cost function by means of the Lagrange multiplier μ. Suppose v(t) is the control which minimizes C_q. For a variation δu to the control v, let x(t) denote the new state variable, and let t = T + ΔT be the time at the new terminal point. The main difference from the previous argument in this section is that the terminal time is T + ΔT rather than T. The new cost is given by

    C_q(u) = ∫_0^{T+ΔT} [F(σ, z + δx, v + δu) + q'(ż + δẋ − f(σ, z + δx, v + δu))] dσ
             + G(x(T + ΔT), T + ΔT) + μ'M(x(T + ΔT), T + ΔT)

Hence the increase in cost C_q(u) − C_q(v) is given as:

    C_q(u) − C_q(v) = ∫_0^T [F_x δx + F_u δu + q'δẋ − q'f_x δx − q'f_u δu] dσ
             + G_x Δx_T + G_T ΔT + μ'(M_x Δx_T + M_T ΔT)
             + ∫_T^{T+ΔT} [F(σ, x, u) + q'ẋ − q'f(σ, x, u)] dσ

where x = z + δx and

    ∫_0^T q' (dδx/dσ) dσ = q'(T) δx(T) − ∫_0^T q̇' δx dσ



Hence

    0 ≤ C_q(u) − C_q(v)
      = ∫_0^T { [F_x − q̇' − q'f_x] δx + [F_u − q'f_u] δu } dσ
        + q'(T) δx(T) + G_x Δx_T + G_T ΔT + μ'(M_x Δx_T + M_T ΔT)
        + ∫_T^{T+ΔT} [F(σ, x, u) + q'ẋ − q'f(σ, x, u) − F(T, x_T, u_T) − q'ẋ(T) + q'f(T, x_T, u_T)] dσ
        + [F(T, x_T, u_T) + q'ż_T + q'δẋ(T) − q'f(T, x_T, u_T)] ΔT         (1.33)

The integral from T to T+ΔT is a second order contribution which goes to zero faster than the other terms as ΔT → 0.

In order to determine Δx_T, consider the solutions of the differential Equation (1.1) which have the initial value x_0. These solutions satisfy the integral equation

    x(t) = x_0 + ∫_0^t f(σ, x, u) dσ

Then

    Δx_T = (x(T) − z(T)) + ∫_T^{T+ΔT} f(σ, x, u) dσ
         = δx(T) + f ΔT + O(ε²)

where the geometry of the proof is illustrated in Figure 1.

Figure 1 — Geometry of the Proof



To within second order

    δx(T) = Δx_T − f ΔT                                                    (1.34)

Within this order of approximation, (1.33) reduces to the following:

    0 ≤ ∫_0^T { [F_x − q̇' − q'f_x] δx + [F_u − q'f_u] δu } dσ
        + [q'(T) + G_x + μ'M_x] Δx_T
        + [− q'(T) f + G_T + μ'M_T + F(T, x_T, u_T)] ΔT

If q is now determined so that the coefficient of δx vanishes,

    q̇' = F_x − q'f_x

This is the same differential Equation (1.23) that p^T satisfied; our Lagrange multiplier can then be identified with p:

    q = p                                                                  (1.35)

Moreover, since the relationship must hold independent of δu,

    F_u − p'f_u = 0                                                        (1.36)

Since there are no longer restrictions on Δx_T and ΔT,

    p'(T) + μ'M_x + G_x = 0

    F + G_T + μ'M_T − p'f = 0                                              (1.37)

Introducing the Hamiltonian (1.25) yields



    ẋ = ∂H/∂p

    ṗ = − ∂H/∂x                                                            (1.28)

    0 = ∂H/∂u

The initial condition x(0) = x_0 together with the terminal conditions

    M(x_T, T) = 0

    p'(T) = − (μ'M_x + G_x)                                                (1.38)

    H(T, u(T)) = G_T + μ'M_T

provides a sufficient number of conditions to determine x, p, u, and T. The last two equations in the system (1.38) are obtained from (1.37).

The problems of optimal control theory generally reduce to a two-point, boundary-value problem for the system of ordinary differential equations (1.30). Bailey, Shampine, and Waltman[6] discuss methods for solving such two-point, boundary-value problems. These problems are presently solved either by the shooting method or by solving a sequence of simpler boundary-value problems whose solutions converge to a solution of the given problem. In any case, very few of these problems can be solved without the use of electronic computers, either digital or hybrid.

The shooting method is the easier, when it works. It consists of supplementing the conditions at one end with a sufficient number of assumed conditions to yield an initial value problem. The initial value problem is solved, and the solution is substituted into the boundary conditions at the other end. If these conditions are satisfied, the solution to the initial value problem is the desired solution to the two-point, boundary-value problem; otherwise, a new set of assumptions is made based on the discrepancy between the actual boundary values and the calculated values. Hopefully, as one continues this iteration process, the solutions to the initial value problem converge to a solution of the two-point, boundary-value problem. The shooting method may not converge, or it can be unstable, that is, a small variation in the initial conditions results in a large variation in the solution. If the initial value problem is unstable, a small error, such as roundoff on a computer, could cause subsequently computed values at another point to be meaningless.

6. Bailey, P. B., et al., "Nonlinear Two-Point, Boundary-Value Problems," Academic Press, Inc., New York (1968).
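
A scalar illustration of the shooting method is sketched below (an assumed example: F = (x² + u²)/2, f = u, G = 0, so that (1.27) gives u = p and (1.28) becomes ẋ = p, ṗ = x with x(0) = x_0 and p(T) = 0); the unknown initial costate p(0) is adjusted until the terminal condition is met:

import numpy as np

def integrate(p0, x0=1.0, T=1.0, n=2000):
    """Integrate the canonical system x' = p, p' = x (u = p) by RK4."""
    h = T / n
    y = np.array([x0, p0])          # y = (x, p)
    rhs = lambda y: np.array([y[1], y[0]])
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2)
        k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y                        # state and costate at t = T

def shoot(x0=1.0, T=1.0):
    """Find p(0) so that the terminal condition p(T) = 0 is met (bisection)."""
    lo, hi = -10.0, 10.0            # bracket for the unknown initial costate
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrate(mid, x0, T)[1] > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print("shooting estimate of p(0):", shoot())
print("analytic value -tanh(1)  :", -np.tanh(1.0))
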
Before proceeding to the direct method for solving the optimal control problem, take a second look at the Hamiltonian H and the function p^T. Suppose that the terminal cost G is identically zero; the cost function is then

    C(u) = ∫_0^T F(σ, x, u) dσ

Further, assume that every point in an open neighborhood N of an optimal trajectory z(t) can be joined to the initial point (0, x_0) by a trajectory x(t) resulting from an optimal control. This assumption makes the minimal cost J a function of the terminal point (T, x_T) in N:

    J(x_T, T) = min ∫_0^T F(σ, x, u) dσ                                    (1.39)

It is assumed that J is twice continuously differentiable. Then,

    J(x_T + Δx_T, T + ΔT) = J(x_T, T) + J_x Δx_T + J_T ΔT                  (1.40)



By the definition of J, there is a control u + δu together with a trajectory x + δx such that

    J(x_T + Δx_T, T + ΔT) = ∫_0^{T+ΔT} F(σ, x + δx, u + δu) dσ             (1.41)

where u is the control such that

    J(x_T, T) = ∫_0^T F(σ, x, u) dσ                                        (1.42)

From (1.41) and (1.42)

    J(x_T + Δx_T, T + ΔT) − J(x_T, T)
        = ∫_0^T [F(σ, x + δx, u + δu) − F(σ, x, u)] dσ + ∫_T^{T+ΔT} F(σ, x + δx, u + δu) dσ
        = ∫_0^T (F_x δx + F_u δu) dσ + F ΔT

Now from (1.23), the equation for p is

    F_x = ṗ' + p'f_x

Hence

    J(x_T + Δx_T, T + ΔT) − J(x_T, T)
        = ∫_0^T [(ṗ' + p'f_x) δx + F_u δu] dσ + F ΔT
        = p'(T) δx(T) + ∫_0^T [p'(f_x δx − δẋ) + F_u δu] dσ + F ΔT



From (1.9)

    f_x δx − δẋ = − f_u δu

Hence

    J(x_T + Δx_T, T + ΔT) − J(x_T, T) = p'(T)(Δx_T − f ΔT) + ∫_0^T (F_u − p'f_u) δu dσ + F ΔT

where use has been made of (1.34). But by (1.27), F_u − p'f_u = 0; so,

    J(x_T + Δx_T, T + ΔT) − J(x_T, T) = p'(T) Δx_T + (F − p'(T)f) ΔT
        = p'(T) Δx_T − H ΔT
        = J_x Δx_T + J_T ΔT

where the last equality results from (1.40). This gives

    J_x = p'                                                               (1.43)

and

    J_T = − H                                                              (1.44)

In the space of variables (x_T, T), the vector p' is the gradient of the function J; it is normal to the surfaces of constant J, and by (1.44), −H is the rate of change of J with the terminal time. This sheds new light on the maximal principle. Along an optimal trajectory, the change in cost J over a given time step ΔT is a minimum, that is, H is a maximum.




Figure 2 — Constant Cost Fronts

These arguments hold only if the terminal cost is zero; G = 0.

RELATION TO DYNAMIC PROGRAMMING

The partial differential equations (1.43) and (1.44) can be obtained by the method of dynamic programming. This method is based on the Bellman principle of optimality. According to the Bellman principle, an optimal control policy has the property that, regardless of the initial state or initial decision, the remaining decisions must constitute an optimal control policy with regard to the state which results from the first decision.

In terms of the cost function

    C(u) = ∫_0^T F(σ, x, u) dσ

the Bellman principle takes the form: the cost C(u) is a minimum along a curve x defined on [0, T] if it is a minimum along each later part of the curve, that is, if

    ∫_t^T F(σ, x, u) dσ

is a minimum along the curve x on the interval [t, T] for all t in [0, T].

The integral is dependent on the end point (t, x(t)). If one defines

    J(x, t) = min_u ∫_t^T F(σ, x, u) dσ                                    (2.1)

for all admissible controls u, then

    J(x, t) = min_u {F(t, x, u) δt} + min_u ∫_{t+δt}^T F dσ

    J(x, t) = min_u {F(t, x, u) δt + J(x + δx, t + δt)}                    (2.2)

This equation forms the basis of the direct methods for solving control problems, described by Dreyfus.[7] Larson[8] extended the direct methods to constrained problems.

7. Dreyfus, S. E., "Dynamic Programming and the Calculus of Variations," Academic Press, Inc., New York (1965).
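A discretized version of (2.2) can be carried out directly on a grid; the sketch below (an illustration with a made-up quadratic cost and dynamics ẋ = u, not Larson's state-increment scheme) sweeps backward in time, taking the minimum over a control grid at each step:

import numpy as np

# Backward sweep for J(x, t) = min_u { (x^2 + u^2) dt + J(x + u dt, t + dt) }
# with dynamics x' = u on 0 <= t <= T and zero terminal cost (illustrative problem).
T, nt = 1.0, 100
dt = T / nt
xs = np.linspace(-2.0, 2.0, 201)          # state grid
us = np.linspace(-2.0, 2.0, 81)           # control grid

J = np.zeros_like(xs)                      # J(x, T) = 0
for _ in range(nt):
    J_next = J.copy()
    best = np.full_like(xs, np.inf)
    for u in us:
        step_cost = (xs**2 + u**2) * dt
        # interpolate the cost-to-go at the successor state x + u*dt
        # (np.interp holds the boundary value for successors outside the grid)
        J_succ = np.interp(xs + u * dt, xs, J_next)
        best = np.minimum(best, step_cost + J_succ)
    J = best

# J(x, 0) approximates the minimal cost from initial state x over the horizon;
# for this linear-quadratic example the exact value is tanh(1) * x^2.
print("approximate minimal cost from x = 1:", np.interp(1.0, xs, J))
print("analytic value tanh(1)             :", np.tanh(1.0))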

If it is assumed that J has partial derivatives, the differential equations (1.43) and (1.44) can be obtained from (2.2); hence, the boundary-value problem for the optimal control is obtained. If the partial derivatives of J exist, the right-hand side of (2.2) can be expanded in a Taylor series:

    J(x, t) = min_u {F δt + J(x, t) + J_x(x, t) δx + J_t(x, t) δt}          (2.3)

8. Larson, R. E., "State Increment Dynamic Programming," American Elsevier Publishing Company, Inc., New York (1968).



From the differential equation (1.1)

    δx = f δt

Hence

    0 = min_u {F + J_x f + J_t} δt

Since δt > 0,

    0 = min_u {F + J_x f + J_t}                                             (2.4)

In order to find the minimum of the term in brackets, it is differentiated with respect to u and the result is set equal to zero. This is a necessary, but not sufficient, condition; however, if one assumes a minimum, it serves the purpose:

    F_u + J_x f_u = 0                                                       (2.5)

By (2.4)

    F + J_x f + J_t = 0                                                     (2.6)

Hence

    J_t = − H = − (F + J_x f)                                               (1.44)

and, from (1.25),

    J_x = p^T                                                               (1.43)



There is a difference between the definition of H here and its 
definition in the previous section. This is only an apparent difference 
in the sign of F, which occurs because the lower limit of the integral 
is used in the definition of J here rather than the upper limit as used 
earlier. Otherwise there is complete agreement with the results of the 
indirect method. 

CONSTRAINTS ON THE CONTROL AND STATE VARIABLES

In most applications, the control or the state variables cannot be chosen arbitrarily but are subject to constraints. In the problem of a ship moving in a current, ship speed is limited by the maximum power available. The constraints can generally be expressed in terms of inequalities of the form

    φ(x, u) ≤ 0                                                             (3.1)

where the vector inequality simply means that each component satisfies the inequality. The number of components in the vector φ is the number of constraints on the system. The analysis does not depend on whether both x and u actually occur in the inequality; one can have constraints on the controls and not on the state of the system, or vice versa, without affecting the analysis.

In this presentation, the variables in the optimal control problem with constraints are the state variable x(t) and the control variable u(t) defined on an interval 0 ≤ t ≤ T. The process being controlled is described by the dynamic equation (1.1):

    ẋ(t) = f(t, x, u)

with initial condition x(0) = x_0; the state and control variables are constrained by the inequality (3.1). For simplicity, the terminal cost is taken as zero, G = 0, and the cost function is given by the equation:

    C_T(u) = ∫_0^T F(σ, x, u) dσ                                            (3.2)

The vector f and the cost function F are twice continuously differentiable with respect to x and continuously differentiable with respect to u.

The Lagrange multipliers will be used here to reduce this problem to a two-point, boundary-value problem. As in (1.32), the differential equation is introduced into the cost function by means of a Lagrange multiplier p:

    C_p(u) = ∫_0^T [F(σ, x, u) + p'(ẋ − f(σ, x, u))] dσ

which yields the variational condition

    0 ≤ ∫_0^T [F_x δx + F_u δu + p'δẋ − p'f_x δx − p'f_u δu] dσ
      = p'(T) δx(T) + ∫_0^T [(F_x − ṗ' − p'f_x) δx + (F_u − p'f_u) δu] dσ      (3.3)

The differential in the cost is greater than or equal to zero since it is assumed that the variation δu is around an optimal control, a control which minimizes the cost.

Because of the constraint (3.1), the vector δu is not free. For instance, suppose that for t between t_1 and t_2, the trajectory z(t) due to the optimal control v(t) is along the boundary of the allowable region; see Figure 3. One cannot freely choose the variation δu in the control vector for t_1 ≤ t ≤ t_2 and still expect to remain in the allowable region R.

For the optimal trajectory z and control v, there are at most a finite number of intervals t_k < t < t_{k+1} such that equality holds for any of the equations in (3.1).* On such an interval, the conditions (3.1) can be split into two sets

*The proof of this statement is topological and beyond the scope of these notes.




Figure 3 — Constrained Variables

    φ_1(z, v) = 0
                                                                             (3.4)
    φ_2(z, v) < 0

where φ = (φ_1, φ_2).

Consider a new vector ψ defined by

    φ(x, u) + ψ(x, u) = 0

The vector ψ is called a defect vector. Along the optimal trajectory, the vector ψ can also be split into two component vectors, ψ_1 and ψ_2, which correspond to the component vectors of φ. The component vectors of ψ also change from interval to interval. Along a given interval [t_k, t_{k+1}]



    ψ_1 = 0
                                                                             (3.5)
    ψ_2 > 0

Since φ_1(z, v) is zero on this interval, either z, v, or both are on the boundary of their allowable range. From previous arguments, it is known that one cannot freely choose δu. Only those values of δu are allowed which satisfy

    φ_1(z + δx, v + δu) ≤ 0

or, by (3.4),

    φ_1(z + δx, v + δu) − φ_1(z, v) ≤ 0

On the other hand, for a neighboring trajectory to z,

    φ_1(z + δx, v + δu) + ψ_1 + δψ_1 = 0

on [t_k, t_{k+1}]; since ψ_1 = − φ_1,

    φ_1(z + δx, v + δu) − φ_1(z, v) + δψ_1 = 0                               (3.6)

In order that the above inequality and (3.6) hold,

    δψ_1 ≥ 0                                                                 (3.7)

Moreover, provided the variations are sufficiently small, δψ_2 is free.

If φ is twice continuously differentiable, then it follows from (3.6) that

    φ_x δx + φ_u δu + δψ = 0                                                 (3.8)



Set δu = (δu_1, δu_2) and consider the first N_1 equations in (3.8), N_1 = dim(φ_1).

If the square matrix φ_1u_1 is not singular, its inverse γ exists, and

    δu_1 = − γ φ_1u_2 δu_2 − γ φ_1x δx − γ δψ_1                              (3.9)

The vectors δu_2 and δx are free; the vector δψ_1 satisfies (3.7). If the matrix φ_1u_1 is singular, the first N_1 constraints were dependent; eliminate the dependent constraints and start again.

The contribution to the cost differential (3.3) from the interval t_k < t < t_{k+1} is the following integral:

    I_k = ∫_{t_k}^{t_{k+1}} { [F_x − ṗ' − p'f_x − (F_u_1 − p'f_u_1) γ φ_1x] δx
          + [(F_u_2 − p'f_u_2) − (F_u_1 − p'f_u_1) γ φ_1u_2] δu_2
          − (F_u_1 − p'f_u_1) γ δψ_1 } dσ

Define the vector λ_1 by

    λ_1' = − (F_u_1 − p'f_u_1) γ                                             (3.10)

Then

    I_k = ∫_{t_k}^{t_{k+1}} { [F_x − ṗ' − p'f_x + λ_1'φ_1x] δx
          + [F_u_2 − p'f_u_2 + λ_1'φ_1u_2] δu_2 + λ_1'δψ_1 } dσ



The vector p can be determined so that the coefficient of δx vanishes:

    ṗ' = F_x − p'f_x + λ_1'φ_1x                                              (3.11)

Since δu_2 is free, the usual argument that δu_2 is zero everywhere except on a small interval yields

    F_u_2 − p'f_u_2 + λ_1'φ_1u_2 = 0                                         (3.12)

Now δu can be chosen so that δu = 0 for t < t_k and for t_{k+1} < t. In this case, the only contribution to the cost difference (3.3) is that due to I_k; hence

    0 ≤ I_k = ∫_{t_k}^{t_{k+1}} λ_1' δψ_1 dσ

By (3.7), δψ_1 ≥ 0; so

    λ_1 ≥ 0                                                                  (3.13)

Let the Hamiltonian be defined by

    H = − F + p'f − λ'φ                                                      (3.14)

where λ is defined by

    λ_i ≥ 0 if φ_i = 0
    λ_i = 0 if φ_i < 0



The differential system (1.28) also holds for this H, that is,

    ẋ = ∂H/∂p

    ṗ = − ∂H/∂x                                                              (1.28)

    0 = ∂H/∂u

One example of a constrained control problem is that of a forced harmonic oscillator in which the magnitude of the force is limited. In this problem, the force is the control, and the process is one of changing the velocity and displacement of the harmonic oscillator. It becomes an optimal control problem if one is interested in finding the force or control which reduces the oscillator from a given velocity and displacement to zero velocity and displacement in minimum time.

The equation of motion for the forced harmonic oscillator with a limited force is simply

    m d²z/dt² + cz = F

where |F| ≤ M, a given constant. Set x = cz/M, τ = ωt, and u = F/M, where ω = √(c/m). In terms of these nondimensional variables, the nondimensional form of the equation of motion is

    ẍ + x = u                                                                (3.15)

where the dots now denote differentiation with respect to τ and the control function satisfies the inequality |u| ≤ 1. This constraint can also be written in the form



    φ_1(u) = u − 1 ≤ 0
                                                                             (3.16)
    φ_2(u) = − (u + 1) ≤ 0

The optimal control problem can be formulated in the phase plane. If (x, y) are the phase plane coordinates, the equation of motion (3.15) takes the form

    ẋ = y
                                                                             (3.17)
    ẏ = − x + u

Starting the oscillator at a given displacement with a given velocity is equivalent to assigning a given point (x, y) = (a, b) in the phase plane as an initial condition for (3.17). The rest state of the oscillator is represented in the phase plane by the point (0, 0), the point of zero displacement and velocity. Hence, the optimal time control problem is one of finding a control u which minimizes the time between states (a, b) and (0, 0). In this problem, the cost is given by

    C_T(u) = T = ∫_0^T dτ                                                    (3.18)

The cost function F(τ, x, u) = 1.

Set p = (p, q). Then the Hamiltonian defined by (3.14), with the two constraints (3.16) combined into the single equivalent constraint (u − 1)(u + 1) ≤ 0 and multiplier λ, is

    H = − 1 + py + q(u − x) − λ(u − 1)(u + 1)                                (3.19)

and, moreover, (1.28) takes the form



    ẋ = ∂H/∂p = y

    ẏ = ∂H/∂q = − x + u

    ṗ = − ∂H/∂x = q

    q̇ = − ∂H/∂y = − p

    0 = ∂H/∂u = q − λ(u − 1) − λ(u + 1) = q − 2λu                            (3.20)

Suppose u is an optimal control which reduces the oscillator from the state (a, b) to the state (0, 0) in the minimal time T, and suppose |u| < 1 on an interval T_1 < τ < T_2. Suppose q ≠ 0 on T_1 < τ < T_2. By (3.20), q − 2λu = 0; hence, λ ≠ 0 on (T_1, T_2). A consequence of λ ≠ 0 is that φ = 0, that is, |u| = 1 (contradicting |u| < 1); hence, if q ≠ 0, it follows that |u(τ)| = 1 on (T_1, T_2). In other words, one needs to look only for the optimal control among those controls for which |u(τ)| = 1.

Now |u| = 1 implies u = ±1; hence, the solution of (3.20) is given as:

    x − u = A sin(τ + α_1)

    y = A cos(τ + α_1)

    p = B sin(τ + α_2)

    q = B cos(τ + α_2)

    q = 2λu                                                                  (3.21)



Since λ > 0, it follows from the last of these equations that the sign of q is the same as the sign of u. Hence, if q changes from positive to negative, the optimal control must switch from +1 to −1. It switches from −1 to +1 if q changes from negative to positive.

In a neighborhood of the origin, the optimal trajectory satisfies

    (x ∓ 1)² + y² = 1

for u = ±1. Hence, its final segment is either on the circle of radius 1 about (1, 0), or it is on the circle of radius 1 about (−1, 0); see Figure 4. Suppose for the sake of argument that there is an ε > 0 such that u(τ) = − 1 for T − ε ≤ τ < T. The last segment of the optimal trajectory is then on the semicircle {(x + 1)² + y² = 1, 0 ≤ y}. Between (0, 0) and (−2, 0), the parameter τ would change along this semicircle by the amount π; hence, the sign of q must change somewhere on this semicircle. At the point S_1 where q changes sign, the sign of u must also change, and u switches from −1 to 1. The optimal path continues backward on the circle of radius r_1 around (1, 0) until either (a, b) is reached or q changes sign. But q does not change sign until the point S_2 is reached, since the time between S_1 and S_2 is π. At S_2, the control would switch to −1 and the optimal trajectory would continue back on the circle of radius r_2 around (−1, 0). This process is continued until the point (a, b) is reached. In the process, one switches control each time one of the following semicircles is intercepted:

    [x + (2n + 1)]² + y² = 1,  y ≥ 0,  n = 0, 1, 2, ...                      (3.22)

    [x − (2n + 1)]² + y² = 1,  y ≤ 0,  n = 0, 1, 2, ...                      (3.23)

The curve formed by these semicircles is called the switching curve; see Figure 5.



Figure 4 — Optimal Trajectory

Figure 5 — Switching Curve



The optimal control and the resulting trajectory in the phase plane can now be obtained by reversing the above procedure. If (a, b) is above the switching curve, proceed with the control u = − 1. The optimal trajectory will be along the circle

    (x + 1)² + y² = (a + 1)² + b²

in the direction of that part of the switching curve which lies to the right of x = 0. For (a, b) on the switching curve, use u = − 1 if x < 0 or u = 1 if x > 0. If (a, b) lies below the switching curve, start with u = 1 and change to u = − 1 at the switching curve. Change the sign of u at each intersection with the switching curve. When u = 1, the optimal trajectory lies on a circle with center at (1, 0); when u = − 1, it is on a circle around (−1, 0).
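This switching rule can be exercised numerically; the sketch below (step size, starting point, and stopping tolerance are arbitrary choices) evaluates the switching curve (3.22)-(3.23), applies u = −1 above it and u = +1 below it, and integrates (3.17) until the state is near the origin:

import numpy as np

def switch_curve(x):
    """y-coordinate of the switching curve (3.22)-(3.23) at abscissa x."""
    if x == 0.0:
        return 0.0
    if x < 0.0:
        c = -(2 * int(np.floor(-x / 2)) + 1)      # nearest negative odd center
        return np.sqrt(max(0.0, 1.0 - (x - c) ** 2))
    c = 2 * int(np.floor(x / 2)) + 1              # nearest positive odd center
    return -np.sqrt(max(0.0, 1.0 - (x - c) ** 2))

def control(x, y):
    """Time-optimal feedback: u = -1 above the switching curve, u = +1 below."""
    ys = switch_curve(x)
    if y > ys:
        return -1.0
    if y < ys:
        return 1.0
    return -1.0 if x < 0 else 1.0                 # on the curve itself

# Integrate x' = y, y' = -x + u from an arbitrary starting state (a, b).
x, y = 2.0, 1.5
dt, t = 1e-3, 0.0
while x * x + y * y > 1e-4 and t < 20.0:
    u = control(x, y)
    x, y = x + dt * y, y + dt * (-x + u)
    t += dt

print("t =", round(t, 3), " final radius =", round(float(np.hypot(x, y)), 4))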

Suppose only one switch in u is needed to reach the origin from (a, b). Because of the symmetry of the problem geometry in the phase plane, it is necessary to consider only those cases for which u = 1 after the switch. The origin is then approached along the trajectory

    x = 1 − cos(T − τ)

    y = − sin(T − τ)                                                         (3.24)

which is on the semicircle {(x, y) | (x − 1)² + y² = 1, y ≤ 0}. Let T_s be the time at which the switch occurs. The optimal trajectory for τ ≤ T_s is given by

    x = − 1 + A sin(τ + α)

    y = A cos(τ + α)                                                         (3.25)



where A and α are constants defined by

    A sin α = a + 1

    A cos α = b

By (3.24) and (3.25), the switching time must satisfy

    1 − cos(T − T_s) = − 1 + A sin(T_s + α)

    − sin(T − T_s) = A cos(T_s + α)

Elimination of T_s from these equations yields a relationship between the terminal time T and the initial point (a, b), namely,

    (a + 1 + cos T)² + (b + sin T)² = 4                                      (3.26)

By definition, time fronts are the curves which connect initial points having the same terminal time T. Equation (3.26) can be used to determine the time fronts for T ≤ π. If T = 0, the time front is simply the origin; if there are no switches in the control, the initial point is an endpoint of the curve connecting all initial points from which the origin is reached with one switch in time T. More than one switch would require T > π. From (3.26), the time fronts for 0 < T < π are segments of the circle of radius 2 around the point (− 1 − cos T, − sin T); see Figure 6. It is the segment of the circle which lies above the switching path. At the switching path, the time front is tangent to the vertical line x = constant for x > 0; at the opposite end, it is tangent to the switching curve. For T = π, the time front is a circle of radius 2 around the origin.



Figure 6 — Time Fronts



STOCHASTIC SYSTEMS 
Stochastic control theory was first applied in this country at the Massachusetts Institute of Technology during World War II to synthesize fire control systems. In the 1960's it was applied to space navigation, guidance, and orbit determination in such well-known missions as Ranger, Mariner, and Apollo. Applications of the filtering aspects of control theory include submarine navigation, fire control, aircraft navigation, practical schemes for detection theory, and numerical integration. There have also been industrial applications; one example involved the problem of basis weight control in the manufacture of paper.[1]

The filtering and prediction theory developed by Wiener and Kolmogorov forms the cornerstone of stochastic control theory. It provides an estimate of the signal or the state of a process on the basis of observation of the signal additively corrupted by noise. Unfortunately, the Wiener-Kolmogorov theory cannot be applied extensively because it requires the solution of the Wiener-Hopf integral equation. It is difficult to obtain closed form solutions to this equation, and it is not an easy equation to solve numerically.

Kalman and Bucy [2] give a solution to the filtering problem under weaker assumptions than those of the original Wiener problem. Their solution makes it possible to solve prediction and filtering problems recursively and is ideally suited for digital computers. Basically, it can be viewed as an algorithm which, given the observation process, sequentially computes in real time the conditional distribution of the signal process. The estimated state of the process is given as the output of a linear dynamical system driven by the observations. One determines the coefficients for the dynamical system by solving an initial value problem for a differential equation. This differential equation is easier to solve than the Wiener-Hopf equation.
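To indicate the recursive character of such a solution, the following is a sketch of the discrete-time predict-correct recursion for a scalar linear system (an illustration only, with made-up numbers; it is the discrete analogue of the filter rather than the continuous-time Kalman-Bucy filter derived in the next section):

import numpy as np

rng = np.random.default_rng(0)

# Scalar linear model: x[k+1] = a x[k] + w[k],  z[k] = x[k] + v[k]
a, q, r = 0.95, 0.1, 0.5        # dynamics, process-noise and measurement-noise variances
n = 200

# Simulate the true state and noisy observations
x_true = np.zeros(n)
for k in range(1, n):
    x_true[k] = a * x_true[k - 1] + rng.normal(0.0, np.sqrt(q))
z = x_true + rng.normal(0.0, np.sqrt(r), size=n)

# Recursive estimate: conditional mean x_hat and its error variance P
x_hat, P = 0.0, 1.0
estimates = []
for k in range(n):
    # predict
    x_hat, P = a * x_hat, a * a * P + q
    # correct with the new observation
    K = P / (P + r)                       # gain
    x_hat = x_hat + K * (z[k] - x_hat)
    P = (1.0 - K) * P
    estimates.append(x_hat)

rms_err = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print("rms estimation error:", round(rms_err, 3), " measurement noise std:", round(np.sqrt(r), 3))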

Our attention here will be limited to linear systems with quadratic cost functions. In this case the solution of the optimal control problem is given by the separation theorem. The solution consists of an optimal filter for estimating the state of the system from the observed data and a linear feedback of the estimated state of the system; see Figure 7.

Figure 7 — Stochastic Control System
(Block diagram: the observed data from the process are passed through the optimal filter to produce the estimated state, which is fed through the linear feedback to generate the control signal applied to the process.)

The optimal filter is the Kalman-Bucy filter, which will be discussed in detail in the next section; the linear feedback is the same as would be obtained if the state of the system could be measured exactly and if there were no random disturbances in the system. Thus, the linear feedback can be determined by solving a deterministic problem. Because of time limitations, we will not prove but merely accept the separation theorem.

One objection to the use of stochastic control theory is that the process to which the theory is applied may not be random but merely irregular. For instance, the traffic flow on the Washington Beltway may not be truly random, but it is certainly highly irregular. If I need to reach Dulles Airport from DTNSRDC by 1 pm, it might take me 45 to 50 minutes; but to reach the airport at 6 pm, I would have to allow 2 hours. The reason for this variation in lead time is that there will be bumper-to-bumper traffic on the Beltway during the rush hour, and any accident brings this traffic to a halt. It is not the microscopic but the macroscopic properties of the traffic flow that govern our lead time estimate. The traffic flow could be analyzed as a stochastic process; such a model would be acceptable provided it predicted the macroscopic properties of the flow. This is analogous to using linear models in the deterministic case. If the predictions agree with the experimental results, the linear theory is said to be good; if they do not, then the process is said to be nonlinear. In using a statistical model, one should recognize that it is only a model and not the actual process, and one should continually strive to determine the accuracy of the model.

There are many reasons in favor of applying stochastic theory. The solution of the stochastic problem may be possible where the deterministic theory is hopelessly intractable. In many problems, such as that of traffic flow, one may not be interested in the microscopic properties but merely in certain macroscopic properties. In the control problem, the stochastic model distinguishes between open- and closed-loop systems but the deterministic model does not. Another reason for using a stochastic model may be that this model is closer to the physics of the actual situation.

In any case, the purpose of this section is to lay the groundwork for stochastic control theory. Our attention will be focused on certain concepts of stochastic processes and random differential equations.

To describe a stochastic process rigorously would require measure theory and a great deal more time. Our approach will therefore not be rigorous, but hopefully it will be complete enough to get across the basic ideas. For the rigorous approach, see either Doob [9] or Gikhman and Skorokhod [10].

A real random variable ξ is a set of numbers or events together with a probability measure defined on this set. It is characterized by its distribution function F(x), which is defined by

    F(x) = P{ξ ≤ x}

where P{ξ ≤ x} is the probability that ξ is less than or equal to x. The distribution function is nonnegative, nondecreasing, and continuous from the left; also F(−∞) = 0 and F(∞) = 1.

9. Doob, J. L., "Stochastic Processes," Wiley, Inc., New York (1963).

10. Gikhman, I. I. and A. V. Skorokhod, "Introduction to the Theory of Random Processes," W. B. Saunders Company, Philadelphia, Pa. (1969).

Analogously, if ξ is an n-tuple of random variables, its distribution function is a function of n real variables:

    F(x_1, x_2, ..., x_n) = P{ξ_1 ≤ x_1, ..., ξ_n ≤ x_n}

and F is called a joint distribution function of the variables ξ_i. The function F(x_1, x_2, ..., x_n) is uniquely defined in n-dimensional Euclidean space E_n, is non-decreasing, and is continuous from the left with respect to each variable. Furthermore,

    F(x_1, x_2, ..., x_i, −∞, x_{i+2}, ..., x_n) = 0

and

    F(x_1, ..., x_i, ∞, ..., ∞) = F^(i)(x_1, ..., x_i)

where F^(i) denotes the distribution function of the i-tuple (ξ_1, ..., ξ_i).

A random function, or a stochastic process, is a random variable ξ(t) which is a function of time. As time varies, ξ(t) describes the evolution of the process. If a random process is recorded as it evolves, the recorded function ξ(·) describes only one of the many possible ways in which the process might have developed. The recorded function ξ(·) is called a sample function of the random process. For each fixed value of t, the quantity ξ(t) is a random variable.

Whereas a random variable is characterized by a distribution function, a stochastic process is characterized by a set of joint distribution functions. Assume that it is possible to assign a probability distribution to the multidimensional random variable ξ = (ξ(t_1), ξ(t_2), ..., ξ(t_n)) for any n and arbitrary times t_i. The distribution function



F(x r x 2 . . . . , x n ; h t n ) = P (C( tl ) < x x S(t n ) < x n } 

is called the finite-dimensional distribution of the stochastic process
ξ(t). For F to be a distribution, it must satisfy the following com-
patibility conditions:

F(x_1, x_2,..., x_i, ∞,..., ∞; t_1,..., t_n) = F(x_1, x_2,..., x_i; t_1,..., t_i)

for i < n and

F(x_1,..., x_n; t_1,..., t_n) = F(x_{j_1},..., x_{j_n}; t_{j_1},..., t_{j_n})

where j_1,..., j_n is an arbitrary permutation of the indices 1, 2,..., n.

The mean value of a stochastic process is defined by

m(t) = E[ξ(t)] = ∫_{−∞}^{∞} x dF(x, t)

where E is the mathematical expected value. The mean value is thus a
function of time. Higher moments of ξ are defined similarly.
The covariance of the stochastic process is given by

r(s, t) = cov[ξ(t), ξ(s)] = E[(ξ(t) − m(t))(ξ(s) − m(s))]

        = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − m(t))(y − m(s)) dF(x, y; t, s)
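These ensemble averages are easy to approximate numerically. The short sketch
below is only an illustration; the scaled random-walk process and every name
in it are assumptions for the example, not anything defined in the lectures.
It estimates m(t) and r(s, t) by averaging over simulated sample functions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ensemble of N sample functions of a scaled Gaussian random walk,
    # observed at the times t_k = k*dt (an arbitrary illustrative process).
    N, K, dt = 2000, 100, 0.01
    steps = rng.normal(0.0, np.sqrt(dt), size=(N, K))
    xi = np.concatenate([np.zeros((N, 1)), np.cumsum(steps, axis=1)], axis=1)

    # Mean value function m(t) = E[xi(t)], estimated by the ensemble average.
    m = xi.mean(axis=0)

    # Covariance function r(s, t) = E[(xi(t) - m(t))(xi(s) - m(s))],
    # estimated over the ensemble for every pair of observation times.
    centered = xi - m
    r = centered.T @ centered / (N - 1)

    # For this particular process r(s, t) should be close to min(s, t).
    print(m[50], r[50, 80], min(50, 80) * dt)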



Our definition of a stochastic process is very general, and
most systems which come under this definition would be mathematically
unmanageable. Specializations of the theory which make it possible
to characterize the distribution of ξ(t_1), ξ(t_2),..., ξ(t_n) in a simple
way are particularly attractive. For instance, if the distribution of
ξ(t_1),..., ξ(t_n) is identical to the distribution of ξ(t_1 + τ),
ξ(t_2 + τ),..., ξ(t_n + τ) for all τ and all arbitrary choices of the
times t_1,..., t_n, then the stochastic process ξ(t) is said to be stationary.
If only the first and second moments E[ξ] and E[ξ²] of the distributions
are equal, then the process is weakly stationary.

Our discussion of control systems has been limited to systems in
which knowledge of the system at time t together with the governing
equations suffices to describe its future evolution. Knowledge of the
past when the present is given is superfluous relative to the future
evolution of the system. The stochastic system analogy of this situation
is the Markov property for random processes; these are stochastic processes
in which the past and future of the process are conditionally
independent. In order to define a Markov process, the conditional
probability and the transition probabilities have to be defined. The
conditional probability P(A|B) is the probability that A will occur if B
has occurred. Given a sequence of times t_1 < t_2 < ... < t_n < t, the
probability that ξ(t) ≤ x if the sample function ξ(·) has already taken
the values ξ(t_1), ξ(t_2),..., ξ(t_n) is denoted by
P(ξ(t) ≤ x | ξ(t_1),..., ξ(t_n)). A stochastic process is said to be a
Markov process if

P(ξ(t) ≤ x | ξ(t_1),..., ξ(t_n)) = P(ξ(t) ≤ x | ξ(t_n))

The transition probability distribution F(x, t | y, s) is defined by
F(x, t | y, s) = P(ξ(t) ≤ x | ξ(s) = y). If a stochastic process is a Markov
process, its finite-dimensional distribution functions are given by

F(x_1, x_2,..., x_n; t_1,..., t_n) = F(x_1; t_1) F(x_2, t_2 | x_1, t_1) ... F(x_n, t_n | x_{n−1}, t_{n−1})

This results from an application of Bayes' rule. A Markov process is
thus defined by two functions, the absolute probability distribution
F(x, t) and the transition probabilities F(x, t | y, s).
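The factorization can be checked numerically in the simplest Gaussian case.
The sketch below is purely illustrative and works with probability densities
rather than distribution functions; it evaluates the joint density of a
standard Wiener process at three times both directly and as the product of an
absolute density and two transition densities.

    import numpy as np

    def normal_pdf(x, mean, var):
        # Gaussian density with the given mean and variance.
        return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

    # Times and trial values for a standard Wiener process w(t), w(0) = 0.
    t1, t2, t3 = 0.5, 1.2, 2.0
    x1, x2, x3 = 0.3, -0.1, 0.4

    # Joint density: multivariate normal with covariance min(t_i, t_j).
    T = np.array([[t1, t1, t1],
                  [t1, t2, t2],
                  [t1, t2, t3]])
    x = np.array([x1, x2, x3])
    joint = (np.exp(-0.5 * x @ np.linalg.solve(T, x))
             / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(T)))

    # Markov factorization: absolute density at t1 times transition densities.
    markov = (normal_pdf(x1, 0.0, t1)
              * normal_pdf(x2, x1, t2 - t1)
              * normal_pdf(x3, x2, t3 - t2))

    print(joint, markov)   # the two values agree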

Consider a system with the following dynamic equation:

ẋ = f(t, x, u) + ε w(t)          (4.1)

where ε is a small parameter and w is a stochastic process. Since w is
stochastic, the state of the system x will also be stochastic; thus, we
are interested in solving stochastic differential equations. Further-
more, our interest is not with a particular sample function x(·), which
is a particular description of the state of the system during one run
through the process; our interest is with the statistical properties of
the stochastic process x(t).

Consider the linear stochastic differential equation 

dx = A x dt + dw (4.2) 

where w is a stochastic process. In order to make some progress in
finding the statistical properties of x, assume that w is a Wiener
process.

A Wiener process is a Markov process which satisfies the following 
conditions: 

1. It is a second-order process; that is, for all t

E[w²(t)] < ∞

Hence, the mean m(t) exists as well as the covariance function

r(s, t) = cov[w(t), w(s)]






2. The process has independent increments; that is, for arbitrary
times t_1 < t_2 < ... < t_n, the increments

w(t_n) − w(t_{n−1}), w(t_{n−1}) − w(t_{n−2}),..., w(t_2) − w(t_1), w(t_1)

are independent.*

3. The distribution of w(t) − w(s) for arbitrary t and s depends
only on t − s. In this case, the process is said to have stationary
increments.

4. The transition probabilities are Gaussian. In the one-
dimensional case, the transition probability density is

p(t + Δt, w | t, 0) = (1/√(2πΔt)) exp(−w²/2Δt)

5. w(0) = 0 with probability one, and E[w(t)] = 0 for all t ≥ 0.

Sample functions of a Wiener process have interesting properties.
They are continuous functions but are nowhere differentiable, and their
paths are of infinite length. Yet it is for just such perturbations
that (4.2) will be solved.
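Because the increments over disjoint intervals are independent Gaussian
variables, a sample function is easy to approximate on a time grid. The
sketch below is only an illustration, with arbitrary numerical choices; it
generates one broken-line approximation of a Wiener path and checks the
stationary-increment and zero-start properties empirically.

    import numpy as np

    rng = np.random.default_rng(1)

    # Approximate one sample function of a scalar Wiener process on [0, T]
    # by summing independent N(0, dt) increments (properties 2-4).
    T, n = 2.0, 4000
    dt = T / n
    dw = rng.normal(0.0, np.sqrt(dt), size=n)
    w = np.concatenate([[0.0], np.cumsum(dw)])    # w(0) = 0 (property 5)

    # Stationary increments (property 3): the variance of w(t+h) - w(t)
    # depends only on h.  Check it for a fixed lag h = lag * dt.
    lag = 100
    incs = w[lag:] - w[:-lag]
    print(incs.var(), lag * dt)                   # both close to h

    # The length of the broken-line path keeps growing as dt -> 0,
    # reflecting the fact that Wiener paths have infinite length.
    print(np.sum(np.abs(dw)))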

If w in (4.2) had bounded variation, the solution could be written
in terms of the transport matrix Φ(t, τ) of the linear system

ẏ = A y          (4.3)

The solution of (4.2) would be

x(t) = Φ(t, 0) c + ∫_0^t Φ(t, τ) dw(τ)          (4.4)

where the value of x at t = 0 is the random variable c. The expectation
of c is m_0 and its covariance matrix is Γ.
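For a scalar system with a constant coefficient, the transport matrix is a
simple exponential, so (4.4) can be evaluated directly and compared with a
step-by-step integration of (4.2). The sketch below is illustrative only;
the coefficient, step size, and all names are assumed for the example.

    import numpy as np

    rng = np.random.default_rng(2)

    # Scalar instance of (4.2): dx = a x dt + dw, with constant a, so that
    # the transport matrix of (4.3) is Phi(t, tau) = exp(a (t - tau)).
    a, T, n = -0.5, 1.0, 2000
    dt = T / n
    tau = np.arange(n) * dt                 # left endpoints of the subintervals

    c = rng.normal(0.0, 1.0)                # random initial value x(0) = c
    dw = rng.normal(0.0, np.sqrt(dt), n)    # Wiener increments on this path

    # Step-by-step (Euler) integration of the stochastic differential equation.
    x = c
    for k in range(n):
        x = x + a * x * dt + dw[k]

    # Variation-of-constants form (4.4) evaluated with the same increments:
    # x(T) = Phi(T, 0) c + sum_k Phi(T, tau_k) dw_k.
    x_voc = np.exp(a * T) * c + np.sum(np.exp(a * (T - tau)) * dw)

    print(x, x_voc)    # the two values agree to within discretization error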



*Independent random variables are defined on page 7 of Doob.⁹



The integral

∫_0^t Φ(t, τ) dw(τ)

is a stochastic integral. Since the transport matrix Φ(t, τ) is
deterministic and has continuous derivatives, one way of defining
this integral is through integration by parts:

∫_0^t Φ(t, τ) dw(τ) = Φ(t, t) w(t) − Φ(t, 0) w(0) − ∫_0^t [∂Φ(t, τ)/∂τ] w(τ) dτ
It follows from (1.15) and other properties of the transport matrix
that

∫_0^t Φ(t, τ) dw(τ) = w(t) − Φ(t, 0) w(0) + ∫_0^t Φ(t, τ) A(τ) w(τ) dτ          (4.5)

The integral on the right exists for almost all sample functions since
the sample functions of w(t) are almost all continuous. This way of
defining the integral has the desirable feature that the integral can
be interpreted as an integral of sample functions. It does not, how-
ever, preserve the intuitive idea that the integral is a limit of sums
of independent random variables, nor can it be extended to the case
where Φ is stochastic. Doob gives a more formal definition of the
integral together with detailed proofs of its stochastic properties.
The expected value of this integral is computed as follows:

E[∫_0^t Φ(t, τ) dw(τ)] = E[w(t)] − Φ(t, 0) E[w(0)] + E[∫_0^t Φ(t, τ) A(τ) w(τ) dτ]

    = m(t) − Φ(t, 0) m(0) + ∫_0^t Φ(t, τ) A(τ) m(τ) dτ

Hence

E[∫_0^t Φ(t, τ) dw(τ)] = ∫_0^t Φ(t, τ) dm(τ)          (4.6)
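The integration-by-parts definition can be verified on a single sample path.
The sketch below assumes, purely for illustration, a scalar system with
constant coefficient a, so that Φ(t, τ) = exp[a(t − τ)] and A(τ) = a; it
compares the direct sum of increments with the right-hand side of (4.5).

    import numpy as np

    rng = np.random.default_rng(3)

    a, T, n = 0.7, 1.0, 20000
    dt = T / n
    tau = np.arange(n) * dt
    dw = rng.normal(0.0, np.sqrt(dt), n)
    w = np.concatenate([[0.0], np.cumsum(dw)])    # w(tau_k), with w(0) = 0

    phi = np.exp(a * (T - tau))                   # Phi(T, tau_k)

    # Left side: the stochastic integral as a limit of sums of increments.
    lhs = np.sum(phi * dw)

    # Right side of (4.5): w(T) - Phi(T, 0) w(0) + integral of Phi A w d tau,
    # with the ordinary integral evaluated by the rectangle rule.
    rhs = w[-1] - np.exp(a * T) * w[0] + np.sum(phi * a * w[:-1] * dt)

    print(lhs, rhs)    # the two values agree to within discretization error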



The properties of the solution (4.4) of the stochastic differential
equation (4.2) will now be investigated. Since x is a linear function
of a normal process, it is also normal and can be characterized com-
pletely by the mean value function and the covariance function. Since
the expected value of the Wiener process w(t) is zero,

E[x(t)] = Φ(t, 0) E[c] + E[∫_0^t Φ(t, τ) dw(τ)] = Φ(t, 0) m_0

where m_0 is the expected value of the initial condition c. Hence

m(t) = E[x(t)] = Φ(t, 0) m_0          (4.7)

Taking derivatives yields

dm/dt = [dΦ(t, 0)/dt] m_0 = A(t) Φ(t, 0) m_0 = A(t) m          (4.8)

Thus the mean value satisfies the linear differential equation (4.3).
The covariance matrix is more difficult to compute. In order to
simplify the calculations, assume m_0 = 0; hence, E[x(t)] = 0. This can
always be achieved by subtracting m from x. For s ≥ t,

R(s, t) = cov[x(s), x(t)] = E[x(s) x^T(t)]

        = E[{Φ(s, t) x(t) + ∫_t^s Φ(s, σ) dw(σ)} x^T(t)]

        = Φ(s, t) E[x(t) x^T(t)] + ∫_t^s Φ(s, σ) E[dw(σ) x^T(t)]

        = Φ(s, t) R(t, t)          (4.9)

The integral is zero since dw(σ) and x(t) are independent for σ ≥ t.

Set P(t) = R(t, t) = E[x(t) x^T(t)]. Then P(t) is the variance and is
therefore the function of interest.

P(t) = E[{Φ(t, 0) c + ∫_0^t Φ(t, τ) dw(τ)}{Φ(t, 0) c + ∫_0^t Φ(t, σ) dw(σ)}^T]

     = Φ(t, 0) E[c c^T] Φ^T(t, 0)

       + Φ(t, 0) E[c ∫_0^t dw^T(σ) Φ^T(t, σ)]

       + ∫_0^t Φ(t, τ) E[dw(τ) c^T] Φ^T(t, 0)

       + ∫_0^t ∫_0^t Φ(t, τ) E[dw(τ) dw^T(σ)] Φ^T(t, σ)



The increments of the Wiener process are independent of c; hence

E[c dw^T(σ)] = E[dw(τ) c^T] = 0

Moreover, from the properties of the Wiener process,

E[dw(τ) dw^T(σ)] = 0

if dτ and dσ have no parts in common; otherwise

E[dw(τ) dw^T(τ)] = R_w dτ

where R_w is the covariance matrix of the Wiener process w. The final
expression for P is then

P(t) = Φ(t, 0) Γ Φ^T(t, 0) + ∫_0^t Φ(t, τ) R_w(τ) Φ^T(t, τ) dτ          (4.10)

A differential equation for P can be obtained from this expression
for P simply by differentiating:

dP/dt = [dΦ(t, 0)/dt] Γ Φ^T(t, 0) + Φ(t, 0) Γ [dΦ^T(t, 0)/dt]

      + Φ(t, t) R_w(t) Φ^T(t, t) + ∫_0^t [∂Φ(t, τ)/∂t] R_w(τ) Φ^T(t, τ) dτ

      + ∫_0^t Φ(t, τ) R_w(τ) [∂Φ^T(t, τ)/∂t] dτ

The transport matrix satisfies

∂Φ(t, τ)/∂t = A Φ(t, τ)

and

∂Φ^T(t, τ)/∂t = Φ^T(t, τ) A^T

Hence

dP/dt = A Φ(t, 0) Γ Φ^T(t, 0) + Φ(t, 0) Γ Φ^T(t, 0) A^T

      + R_w(t) + ∫_0^t A Φ(t, τ) R_w(τ) Φ^T(t, τ) dτ + ∫_0^t Φ(t, τ) R_w(τ) Φ^T(t, τ) A^T dτ

      = A {Φ(t, 0) Γ Φ^T(t, 0) + ∫_0^t Φ(t, τ) R_w(τ) Φ^T(t, τ) dτ}

      + {Φ(t, 0) Γ Φ^T(t, 0) + ∫_0^t Φ(t, τ) R_w(τ) Φ^T(t, τ) dτ} A^T + R_w(t)

Thus from (4.10)

dP/dt = A P + P A^T + R_w          (4.11)

P(0) = Γ          (4.12)
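Equations (4.8) and (4.11) are ordinary differential equations and can be
integrated by any standard scheme. In the sketch below the two-dimensional
system and all numerical values are arbitrary illustrative assumptions; it
propagates m(t) and P(t) with Euler steps and checks P(T) against a Monte
Carlo estimate obtained by simulating (4.2) directly.

    import numpy as np

    rng = np.random.default_rng(4)

    # Arbitrary two-dimensional example of (4.2): dx = A x dt + dw.
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.8]])
    Rw = np.diag([0.1, 0.3])        # covariance rate of the Wiener perturbation
    m0 = np.array([1.0, 0.0])       # mean of the initial condition c
    Gamma = 0.1 * np.eye(2)         # covariance of the initial condition c

    T, n = 2.0, 2000
    dt = T / n

    # Propagate the mean (4.8) and the covariance (4.11)-(4.12) with Euler steps.
    m, P = m0.copy(), Gamma.copy()
    for _ in range(n):
        m = m + A @ m * dt
        P = P + (A @ P + P @ A.T + Rw) * dt

    # Monte Carlo check: simulate many realizations of the system itself.
    N = 5000
    x = rng.multivariate_normal(m0, Gamma, size=N)
    for _ in range(n):
        dw = rng.multivariate_normal(np.zeros(2), Rw * dt, size=N)
        x = x + x @ A.T * dt + dw

    print(P)
    print(np.cov(x.T))              # close to the P(T) obtained from (4.11)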



THE KALMAN-BUCY FILTER 

The solution of the optimal control problem for a linear stochastic 
system is given by the separation theorem. It consists of an optimal 
filter for estimating the state of the system from the observed data and 
a linear feedback of the estimated state of the system; see Figure 7. 
The linear feedback is the same as the feedback that would be obtained 
if there were no stochastic perturbation of the system. This section 
will develop the explicit computational schemes for solving the filter- 
ing problem. 

Suppose we have the stochastic process described in the previous 
section 

dx = A x dt + d w(t) (5.1) 

x(0) = c (5.2) 

where w(t) is a Wiener process and c is a Gaussian zero-mean n-vector.
In an actual case in which the process is realized, it is important to 
know the state of the system. It is, however, not always possible to 
measure x directly; instead, a set of quantities z(t) dependent on x are 
measured. Assume that the dependence of z on x is linear and is given 
by 

dz = H x dt + dv (5.3) 

where the perturbation v is a Wiener process independent of x. 

The filter problem can be formulated as follows. Assume that a
realization of the output z has been observed over the interval
0 ≤ τ ≤ t. Determine the best estimate of the value of the state vector
x at time t. It is assumed here that the admissible estimates of x are
linear functionals F(z) of the observed output z. The criterion
for determining the best estimate is that the mean square estimation
error be a minimum. This best estimate x̂(t) is dependent on the values
of z(τ) in the interval 0 ≤ τ ≤ t, and it can be proved that it is a
linear combination of the values of z on this interval:

x̂(t) = ∫_0^t K(t, τ) dz(τ)          (5.4)

Since z(τ) is a stochastic variable, x̂(t) is a stochastic integral.

Interpolation and extrapolation are two problems that are related
to the filtering problem. The interpolation problem is one of estimating
the state at some time T < t; the extrapolation problem is one of esti-
mating it at some time T > t. This latter problem is the one which is
of interest to the stock market investor.

The condition that x̂(t) is the best estimate, from among all linear
functionals of z(τ), of the state vector x in the least squares sense is
stated mathematically as follows. For every constant vector λ and
linear functional F,

E[{λ^T(x(t) − x̂(t))}²] ≤ E[{λ^T(x(t) − F(z))}²]          (5.5)

where all variables have a zero mean,

E[x(t)] = E[x̂(t)] = E[F(z)] = 0

Now set

x̃ = x − x̂

where x̃ is called the minimum error vector. Since x(t) − F(z) = x̃ + (x̂ − F(z)),

E[(λ^T x̃)²] ≤ E[{λ^T(x̃ + (x̂ − F(z)))}²]

            = E[(λ^T x̃)²] + 2E[λ^T x̃ λ^T(x̂ − F(z))] + E[{λ^T(x̂ − F(z))}²]

For all λ and F(z), the criterion (5.5) requires

E[{λ^T(x̂ − F(z))}²] + 2E[λ^T x̃ λ^T(x̂ − F(z))] ≥ 0

This can be true only if

0 = E[λ^T x̃ λ^T(x̂ − F(z))] = λ^T E[x̃ (x̂ − F(z))^T] λ

since otherwise the linear term would dominate for estimates F(z) sufficiently
close to x̂. But this implies that

E[x̃ (x̂ − F(z))^T] = 0

for any linear combination F(z) of elements of z; hence

E[x̃ F^T(z)] = 0          (5.6)
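The orthogonality condition (5.6) has a familiar finite-dimensional analogue:
the best linear estimate of x from a data vector z leaves an error that is
uncorrelated with every linear function of the data. The sketch below is a
purely illustrative finite-dimensional stand-in, with assumed numbers; it
computes the best linear estimator from sample statistics and checks that the
error is orthogonal to the data.

    import numpy as np

    rng = np.random.default_rng(5)

    # Finite-dimensional stand-in: estimate a scalar x from a 3-vector of
    # noisy linear measurements z = h x + v, using the best linear map K.
    N = 200000
    h = np.array([1.0, 0.5, -0.3])
    x = rng.normal(0.0, 2.0, size=N)                   # zero-mean state samples
    v = rng.normal(0.0, 0.4, size=(N, 3))              # measurement noise
    z = np.outer(x, h) + v

    # Best linear estimator in the least-squares sense, from sample moments.
    Szz = z.T @ z / N
    Sxz = (x[:, None] * z).sum(axis=0) / N
    K = np.linalg.solve(Szz, Sxz)

    x_hat = z @ K
    x_err = x - x_hat

    # Discrete analogue of (5.6): the estimation error is uncorrelated with
    # the data, E[x_err z^T] = 0.
    print((x_err[:, None] * z).mean(axis=0))           # close to the zero vector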



An integral equation for the kernel K(t, τ) can be derived from
(5.6). This kernel is not a stochastic quantity, and it can be de-
termined independently of the realization z(·). For F(z) = z(τ) −
z(σ), 0 ≤ σ ≤ τ ≤ t, the expression (5.6) yields

E[x(t)(z(τ) − z(σ))^T] = E[x̂(t)(z(τ) − z(σ))^T]

    = E[∫_0^t K(t, r) dz(r) {∫_σ^τ (H(s) x(s) ds + dv(s))}^T]

    = E[∫_σ^τ ∫_0^t K(t, r)(H(r) x(r) dr + dv(r))(H(s) x(s) ds + dv(s))^T]

    = ∫_σ^τ ∫_0^t K(t, r) H(r) E[x(r) x^T(s)] H^T(s) dr ds

      + ∫_σ^τ ∫_0^t K(t, r) H(r) E[x(r) dv^T(s)] dr

      + ∫_σ^τ ∫_0^t K(t, r) E[dv(r) x^T(s)] H^T(s) ds

      + ∫_σ^τ ∫_0^t K(t, r) E[dv(r) dv^T(s)]

From the properties of Wiener processes,

∫_σ^τ ∫_0^t K(t, r) E[dv(r) dv^T(s)] = ∫_σ^τ K(t, s) R_v(s) ds

where R_v is the covariance matrix of the process v. Furthermore,
dv and x are independent, so E[x(r) dv^T(s)] = E[dv(r) x^T(s)] = 0; hence

E[x(t)(z(τ) − z(σ))^T] = ∫_σ^τ {∫_0^t K(t, r) H(r) E[x(r) x^T(s)] H^T(s) dr + K(t, s) R_v(s)} ds          (5.7)

for all σ and τ. On the other hand, from (5.3),

E[x(t)(z(τ) − z(σ))^T] = E[x(t) {∫_σ^τ (H(s) x(s) ds + dv(s))}^T]

    = ∫_σ^τ E[x(t) x^T(s)] H^T(s) ds

    = ∫_σ^τ {∫_0^t K(t, r) H(r) E[x(r) x^T(s)] H^T(s) dr + K(t, s) R_v(s)} ds

where the last equality results from (5.7). Since this equation holds
for all σ and τ in the interval [0, t],

K(t, s) R_v(s) = E[x(t) x^T(s)] H^T(s) − ∫_0^t K(t, r) H(r) E[x(r) x^T(s)] H^T(s) dr          (5.8)

This is a nonhomogeneous integral equation for K(t, s). Its kernel
is H(r) E[x(r) x^T(s)] H^T(s). Since it corresponds to a positive definite
quadratic form, all its eigenvalues are positive and the equation has a
solution. Unfortunately, it is not possible to calculate K(t, s) from
this equation because E[x(r) x^T(s)], the covariance function of x, is unknown.

A different equation for K(t, s) can be obtained from (5.8) by
differentiating both sides of it with respect to t.

[∂K(t, s)/∂t] R_v(s) = {∂E[x(t) x^T(s)]/∂t} H^T(s) − K(t, t) H(t) E[x(t) x^T(s)] H^T(s)

    − ∫_0^t [∂K(t, r)/∂t] H(r) E[x(r) x^T(s)] H^T(s) dr

By (5.1),

dx = A x dt + dw

Hence

E[dx(t) x^T(s)] H^T(s) = A(t) E[x(t) x^T(s)] H^T(s) dt + E[dw(t) x^T(s)] H^T(s)

The second term vanishes since dw(t) and x(s) are independent if
t ≥ s. This yields

[∂K(t, s)/∂t] R_v(s) = [A(t) − K(t, t) H(t)] E[x(t) x^T(s)] H^T(s)

    − ∫_0^t [∂K(t, r)/∂t] H(r) E[x(r) x^T(s)] H^T(s) dr



Use of the integral equation (5.8) to obtain an expression for
E[x(t) x^T(s)] H^T(s) yields

[∂K(t, s)/∂t + K(t, t) H(t) K(t, s) − A(t) K(t, s)] R_v(s)

    = − ∫_0^t [∂K(t, r)/∂t + K(t, t) H(t) K(t, r) − A(t) K(t, r)] H(r) E[x(r) x^T(s)] H^T(s) dr          (5.9)

Set

ψ(t, s) = ∂K(t, s)/∂t + K(t, t) H(t) K(t, s) − A(t) K(t, s)

Then by (5.9)

ψ(t, s) = − ∫_0^t ψ(t, r) H(r) E[x(r) x^T(s)] H^T(s) R_v^{-1}(s) dr          (5.10)

Since the kernel of this integral equation corresponds to a positive
definite quadratic form, the only solution of (5.10) is

ψ(t, s) = 0



This yields the following differential equation for K(t, s):

∂K(t, s)/∂t = A(t) K(t, s) − K(t, t) H(t) K(t, s)          (5.11)

From the integral equation (5.8) for K(t, s), evaluated at s = t,

K(t, t) R_v(t) = E[x(t) x^T(t)] H^T(t) − ∫_0^t K(t, r) H(r) E[x(r) x^T(t)] H^T(t) dr

On the other hand,

E[x̂(t) x^T(t)] = E[∫_0^t K(t, r) dz(r) x^T(t)]

    = ∫_0^t K(t, r) H(r) E[x(r) x^T(t)] dr + ∫_0^t K(t, r) E[dv(r) x^T(t)]

where the second integral vanishes; hence

K(t, t) R_v(t) = E[x(t) x^T(t)] H^T(t) − E[x̂(t) x^T(t)] H^T(t)

    = E[(x(t) − x̂(t)) x^T(t)] H^T(t)

    = {E[x̃(t) x̃^T(t)] + E[x̃(t) x̂^T(t)]} H^T(t)



The condition that x̂ is the best estimate from among all linear func-
tionals of z(τ) leads to the result that E[x̃(t) x̂^T(t)] = 0. Hence, with
P(t) = E[x̃(t) x̃^T(t)] the covariance matrix of the estimation error,

P(t) H^T(t) = E[x̃(t) x̃^T(t)] H^T(t) = K(t, t) R_v(t)          (5.12)
From the stochastic integral (5.4),

x̂(t) = ∫_0^t K(t, r) dz(r)

so that

dx̂(t) = K(t, t) dz(t) + [∫_0^t (∂K(t, r)/∂t) dz(r)] dt

       = P H^T R_v^{-1} dz(t) + [∫_0^t (A(t) K(t, r) − K(t, t) H(t) K(t, r)) dz(r)] dt

by (5.11) and (5.12); hence

dx̂(t) = A(t) x̂(t) dt + P H^T R_v^{-1} (dz(t) − H(t) x̂(t) dt)          (5.13)

Since z(t) and presumably dz(t) are known, this is a stochastic differ-
ential equation for x̂(t).
Note that

dz(t) − H(t) x̂(t) dt = dz(t) − H(t) x(t) dt + H(t) x̃(t) dt

                      = dv(t) + H(t) x̃(t) dt

From this expression and (5.13), we get the following stochastic
differential equation for x̃:

dx̃ = dx(t) − dx̂(t)

    = A x dt + dw − A x̂ dt − P H^T R_v^{-1} (dv + H(t) x̃ dt)

    = A x̃ dt + dw − P H^T R_v^{-1} dv − P H^T R_v^{-1} H x̃ dt

    = [A − P H^T R_v^{-1} H] x̃ dt + dw − P H^T R_v^{-1} dv
with x̃(0) = x(0). By the methods developed in the previous section
for stochastic differential equations,

P(t) = E[x̃(t) x̃^T(t)]

     = Φ(t, 0) Γ Φ^T(t, 0) + ∫_0^t Φ(t, σ) [Q(σ) + P(σ) H^T(σ) R_v^{-1}(σ) H(σ) P^T(σ)] Φ^T(t, σ) dσ          (5.14)

where Φ(t, τ) is the transport matrix associated with the linear
differential equation

dy/dt = (A − P H^T R_v^{-1} H) y          (5.15)

and where

Q dτ = E[dw(τ) dw^T(τ)]          (5.16)

Hence, from (4.11),

dP/dt = (A − P H^T R_v^{-1} H) P + P (A − P H^T R_v^{-1} H)^T + Q(t) + P H^T R_v^{-1} H P

so that

dP/dt = A P + P A^T − P H^T R_v^{-1} H P + Q          (5.17)

P(0) = Γ          (5.18)



This set of equations finishes the solution of the filter problem.

The optimal filter is a feedback system which is described by the
stochastic differential equation (5.13). It is obtained by taking the
measurements z(t), forming the error signal dz(t) − H(t) x̂(t) dt, and feed-
ing the error forward with a gain P(t) H^T(t) R_v^{-1}(t). P(t), the error
variance, is obtained as a solution to the nonlinear Riccati-type
equation (5.17), H(t) is a known transformation matrix, and R_v is
the variance of the Wiener process v. A block diagram of the filter is
shown in Figure 8. The variables appearing in this diagram are vectors,
and the boxes represent matrices operating on vectors. The double lines
which indicate the direction of signal flow serve as a reminder that multiple
signals rather than a single one are being directed.




Figure 8 — Optimal Filter 
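The computational scheme can be made concrete by discretizing (5.1), (5.3),
(5.13), and (5.17) with simple Euler steps. The sketch below is only an
illustration; the system matrices A, H, Q, R_v, and Γ are assumed values,
not anything taken from the lectures.

    import numpy as np

    rng = np.random.default_rng(6)

    # Arbitrary illustrative system: dx = A x dt + dw,  dz = H x dt + dv.
    A = np.array([[0.0, 1.0],
                  [-1.5, -0.4]])
    H = np.array([[1.0, 0.0]])          # only the first component is observed
    Q = np.diag([0.0, 0.2])             # covariance rate of dw
    Rv = np.array([[0.05]])             # covariance rate of dv
    Gamma = np.eye(2)                   # covariance of the initial state c

    T, n = 10.0, 10000
    dt = T / n

    x = rng.multivariate_normal(np.zeros(2), Gamma)   # true state, x(0) = c
    x_hat = np.zeros(2)                               # estimate, zero-mean prior
    P = Gamma.copy()                                  # error covariance, (5.18)

    for _ in range(n):
        # Simulate the system (5.1) and the observation increment (5.3).
        dw = rng.multivariate_normal(np.zeros(2), Q * dt)
        dv = rng.multivariate_normal(np.zeros(1), Rv * dt)
        x = x + A @ x * dt + dw
        dz = H @ x * dt + dv

        # Kalman-Bucy gain (5.12) and filter equation (5.13).
        K = P @ H.T @ np.linalg.inv(Rv)
        x_hat = x_hat + A @ x_hat * dt + K @ (dz - H @ x_hat * dt)

        # Riccati equation (5.17) for the error covariance.
        P = P + (A @ P + P @ A.T - K @ Rv @ K.T + Q) * dt

    print(x, x_hat)    # the estimate tracks the true state
    print(P)           # predicted error covariance at time T

With a finer time step this discretization approaches the continuous-time
filter, and the sample average of x̃ x̃^T over many independent runs approaches
the P(t) computed from (5.17).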



REFERENCES

1. Astrom, K. J., "Introduction to Stochastic Control Theory,"
Academic Press, Inc., New York (1970).

2. Kalman, R. E. and R. S. Bucy, "New Results in Linear Filtering
and Prediction Theory," Journal of Basic Engineering, Series D, American
Society of Mechanical Engineers, Vol. 83, pp. 95-108 (1961).

3. Bucy, R. S. and P. D. Joseph, "Filtering for Stochastic
Processes with Applications to Guidance," Interscience Publishers, Inc.,
New York (1968).

4. Bellman, R. E. and S. E. Dreyfus, "Applied Dynamic Programming,"
Princeton University Press, Princeton, N.J. (1962).

5. Coddington, E. A. and N. Levinson, "Theory of Ordinary Differ-
ential Equations," McGraw-Hill Book Company, Inc., New York (1955).

6. Bailey, P. B., et al., "Nonlinear Two Point Boundary Value
Problems," Academic Press, Inc., New York (1968).

7. Dreyfus, S. E., "Dynamic Programming and the Calculus of
Variations," Academic Press, Inc., New York (1965).

8. Larson, R. E., "State Increment Dynamic Programming," American
Elsevier Publishing Company, Inc., New York (1968).

9. Doob, J. L., "Stochastic Processes," Wiley, Inc., New York
(1963).

10. Gikhman, I. I. and A. V. Skorokhod, "Introduction to the
Theory of Random Processes," W. B. Saunders Company, Philadelphia, Pa. (1969).






INITIAL DISTRIBUTION

Copies

1 US Army Waterways 
Experiment Station 
Res Center Lib 

1 CHONR 

R. D. Cooper, Code 438 

1 ONR BOSTON 

1 ONR CHICAGO 

1 ONR PASADENA 

1 USNA 

P. Van Mater 

1 NAVPGSCOL 

NROTC & NAVADMINU 

1 NAVWARCOL 

1 NRL/Lib 

4 NAVSEA 

2 SEA 09G32 
1 SEA 0331G 
1 SEA 035B 

1 NAVFACENGCOM 

1 NAVOCEANO/Lib 

1 NADC 

1 NELC 

1 NWC 

1 NUC, San Diego 

1 NUC, Pasadena 

1 NCEL/Code L31 

1 NSWC, White Oak/Lib 

1 NSWC, Dahlgren/Lib 




1 NUSC NPT 

1 NUSC NLONLAB 

1 NAVSHIPYD BREM/Lib 

1 NAVSHIPYD CHASN/Lib 

1 NAVSHIPYD MARE/Lib 

1 NAVSHIPYD NORVA/Lib 

1 NAVSHIPYD PEARL/Lib 

1 NAVSHIPYD PHILA/Lib 

1 NAVSHIPYD PTSMH/Lib 

10 NAVSEC 

1 SEC 6034B 

2 SEC 6110 
1 SEC 6114D 
1 SEC 6114H 
1 SEC 6120 
1 SEC 6136 
1 SEC 6140B 
1 SEC 6144G 
1 SEC 6660.03/D.L. Blount 



1 AFFDL/FDDS/J. Olsen 

2 AFFDL/FYS 

1 Dale Cooley 
1 S.J. Pollock 

12 DDC 

2 COGARD 

1 COM (E) , STA 5-2 

1 Div of Merchant Marine Safety

1 LC/SCI & TECH DIV 

1 MARAD/Adv Ship Prog Off 

1 MMA/Tech Lib 

1 NASA AMES RES CEN 

R.T. Medan, MS 221-2 









3 NASA LANGLEY RES CEN 

1 J.E. Lamar, Ms 404A 

1 Brooks 

1 E.C. Yates, Jr., Ms 340 

1 NASA Sci & Tech Info Facility 

1 NSF/Eng Div 

1 Univ of Bridgeport 
Prof E. Uram 
Mech Eng Dept 

4 Univ of California, Berkeley 

College of Eng, NA Dept 

1 Lib 

1 J.R. Paulling 

1 J.V. Wehausen 

1 H.A. Schade 

3 Calif Inst of Tech 
1 A.J. Acosta 
1 T.Y. Wu 
1 M.S. Plesset 

1 Colorado State Univ 
M. Albertson 
Dept of Civ Eng 

1 Univ of Connecticut 
V. Scottron 
Hyd Res Lab 

1 Cornell Univ/W.R. Sears 
Grad School of Aero Eng 

1 Florida Atlantic Univ 
Ocean Eng Lib 

1 Harvard Univ/Dept of Math 
G. Birkhoff 

1 Univ of Hawaii 

Dr. Bretschneider 



3 State Univ of Iowa 

Iowa Inst of Hyd Res 
1 L. Landweber 
1 J. Kennedy 
1 Hunter Rouse 

1 Kansas State Univ 

Engineering Exp Station 
D.A. Nesmith 

1 Lehigh Univ/Fritz Lab Lib 

1 Long Island Univ 

Grad Dept of Marine Sci 
David Price 

1 Delaware Univ/Math Dept 

4 Univ of Maryland 

1 Eng Lib 
1 P.F. Cunniff 
1 C.L. Sayre 
1 F. Buckley 

6 Mass Inst of Technol 
Dept of Ocean Eng 

1 P. Mandel 

1 J.R. Kerwin 

1 N. Neumann 

1 P. Leehey 

1 M. Abkowitz 
Hydro Lab 

1 A.T. Ippen 

3 Univ of Mich/Dept NAME 
1 T.F. Ogilvie 
1 H. Benford 
1 R.B. Couch 

5 Univ of Minn/ St Anthony Falls 

1 C.S. Song 
1 J.M. Killen 
1 F. Schiebe 
1 J.M. Wetzel 



Univ of Illinois 
College of Eng 
J.M. Robertson 
Theoretical & Applied Mech 



New York Univ 

1 W.J. Pierson, Jr. 
Courant Inst of Math Sci 

1 A.S. Peters 

1 J.J. Stoker 






1 Univ of Notre Dame
   A.F. Strandhagen

1 Penn State Univ 

Ordnance Res Lab 

1 St. John's Univ/Math Dept 
Jerome Lurye 

3 Southwest Res Inst 
1 H.N. Abramson 
1 G.E. Tansleben, Jr. 
1 Applied Mech Review 

3 Stanford Univ/Dept of Civ Eng 
1 R.L. Street 
1 B. Perry 
Dept of Aero and Astro 
1 H. Ashley 

1 Stanford Res Inst/Lib 

3 Stevens Inst of Tech 
Davidson Lab 
1 J. P. Breslin 
1 S. Tsakonas 
1 Lib 

1 Utah State Univ/Col of Eng 

Roland W. Jeppson 

2 Univ of Virginia/Aero Eng Dept 

1 J.K. Haviland 
1 Young Yoo 

2 Webb Institute 
1 E.V. Lewis 
1 L.W. Ward 

1 Worcester Poly Inst/Alden 
Res Lab 

1 Woods Hole, Ocean Eng Dept 

1 SNAME 

1 Aerojet-General 
W.C. Beckwith 

1 Bethlehem Steel Sparrows 
A.D. Haff, Tech Mgr 




1 Bolt, Beranek & Newman, MA 

11 Boeing Company/Aerospace Group 
1 R.R. Barbar 
1 H. French 
1 R. Hatte 
1 R. Hubard 
1 F.B. Watson 
1 W.S. Rowe 
1 T.G.B. Marvin 
1 C.T. Ray 
Commercial Airplane Group 
1 Paul E. Rubbert
1 Gary R. Saaris 

1 Cornell Aero Lab 
Applied Mech Dept 

1 Flow Research, Inc 
Frank Dvorak 

1 Eastern Res Group 

2 General Dynamics Corp 

1 Convair Aerospace Div 
A.M. Cunningham, Jr. 
MS 2851 

1 Electric Boat Div 
V.T. Boatwright, Jr. 

1 Gibbs & Cox, Inc. 

Tech Info Control Section 

1 Grumman Aircraft Eng Corp 

W.P. Carl, Mgr, Grumman Marine 

1 S.F. Hoerner 

2 Hydronautics, Inc. 

1 P. Eisenberg 
1 M.P. Tulin 

4 Lockheed Aircraft Corp 

Lockheed Missiles & Space 
1 R.L. Waid 
1 R. Lacy 
1 Robert Perkins 
1 Ray Kramer 

1 Marquardt Corp/F. Lane 

General Applied Sci Labs 






1 Martin Marietta Corp/Rias
   Peter F. Jordan

McDonnell-Douglas Corp 
Douglas Aircraft Company 
1 A.M.O. Smith 
1 Joseph P. Giesing 



1 Newport News Shipbuilding/Lib 
1 Nielsen, NA Rockwell 

1 North American Rockwell 

Los Angeles Div Jan R. Tulinius 
Dept 056-015 

2 Northrop Corp 
Aircraft Div 

1 J.T. Gallagher 
1 J.R. Stevens 

1 Oceanics, Inc. 
Paul Kaplan 

1 Sperry Sys Mgmt 

1 Robert Taggart, Inc. 

1 Tracor 

CENTER DISTRIBUTION

Copies   Code
   1     11
   1     115
   1     1151
   1     1152
   1     1154
   1     15
   1     1502
   1     1504
   1     1505
   1     1506
   1     1507
   1     152
   1     1521
   1     1528
   1     1532    Wen Lin
   1     154
   1     1541
   1     1542
   1     1544
   1     1548
   3     1552
   1               J. McCarthy
   1               K.P. Kerney
  50               T. J. Langan
   1               H.T. Wang
   3     1556
   1               P.K. Besch
   1               E.P. Rood
   1               D. Coder
   1     156
   1     1568
   2     1572
   1               M. Ochi
   1               C. Lee
   1     1576
   1     16
   1     167
   1     169     I. J. Engler
   1     17
   1     18
   2     1843
   1     19
   1     1966
   1     273
   1     2732    Y. Liu
  30     5211    Report Distribution
   1     5221    Library (C)
   1     5222    Library (A)